00:00:00.001 Started by upstream project "autotest-nightly" build number 4336 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3699 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.138 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.250 > git --version # 'git version 2.39.2' 00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.644 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.655 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.667 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.667 > git config core.sparsecheckout # timeout=10 00:00:08.678 > git read-tree -mu HEAD # timeout=10 00:00:08.694 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.720 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.720 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.882 [Pipeline] Start of Pipeline 00:00:08.897 [Pipeline] library 00:00:08.898 Loading library shm_lib@master 00:00:08.898 Library shm_lib@master is cached. Copying from home. 00:00:08.916 [Pipeline] node 00:00:08.927 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:08.929 [Pipeline] { 00:00:08.940 [Pipeline] catchError 00:00:08.941 [Pipeline] { 00:00:08.953 [Pipeline] wrap 00:00:08.960 [Pipeline] { 00:00:08.968 [Pipeline] stage 00:00:08.969 [Pipeline] { (Prologue) 00:00:08.984 [Pipeline] echo 00:00:08.986 Node: VM-host-SM0 00:00:08.991 [Pipeline] cleanWs 00:00:09.000 [WS-CLEANUP] Deleting project workspace... 00:00:09.000 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.007 [WS-CLEANUP] done 00:00:09.246 [Pipeline] setCustomBuildProperty 00:00:09.334 [Pipeline] httpRequest 00:00:10.230 [Pipeline] echo 00:00:10.232 Sorcerer 10.211.164.20 is alive 00:00:10.241 [Pipeline] retry 00:00:10.243 [Pipeline] { 00:00:10.257 [Pipeline] httpRequest 00:00:10.262 HttpMethod: GET 00:00:10.263 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.263 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.282 Response Code: HTTP/1.1 200 OK 00:00:10.283 Success: Status code 200 is in the accepted range: 200,404 00:00:10.284 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.915 [Pipeline] } 00:00:15.936 [Pipeline] // retry 00:00:15.946 [Pipeline] sh 00:00:16.235 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.255 [Pipeline] httpRequest 00:00:16.637 [Pipeline] echo 00:00:16.639 Sorcerer 10.211.164.20 is alive 00:00:16.651 [Pipeline] retry 00:00:16.653 [Pipeline] { 00:00:16.669 [Pipeline] httpRequest 00:00:16.673 HttpMethod: GET 00:00:16.674 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:16.675 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:16.690 Response Code: HTTP/1.1 200 OK 00:00:16.691 Success: Status code 200 is in the accepted range: 200,404 00:00:16.691 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:01:26.863 [Pipeline] } 00:01:26.882 [Pipeline] // retry 00:01:26.890 [Pipeline] sh 00:01:27.174 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:01:29.720 [Pipeline] sh 00:01:30.003 + git -C spdk log --oneline -n5 00:01:30.003 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:01:30.003 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:01:30.003 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:01:30.003 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:01:30.003 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device 00:01:30.024 [Pipeline] writeFile 00:01:30.039 [Pipeline] sh 00:01:30.323 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.335 [Pipeline] sh 00:01:30.617 + cat autorun-spdk.conf 00:01:30.617 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.617 SPDK_TEST_NVMF=1 00:01:30.617 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.617 SPDK_TEST_VFIOUSER=1 00:01:30.617 SPDK_TEST_USDT=1 00:01:30.617 SPDK_RUN_ASAN=1 00:01:30.617 SPDK_RUN_UBSAN=1 00:01:30.617 SPDK_TEST_NVMF_MDNS=1 00:01:30.617 NET_TYPE=virt 00:01:30.617 SPDK_JSONRPC_GO_CLIENT=1 00:01:30.617 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.624 RUN_NIGHTLY=1 00:01:30.626 [Pipeline] } 00:01:30.641 [Pipeline] // stage 00:01:30.663 [Pipeline] stage 00:01:30.666 [Pipeline] { (Run VM) 00:01:30.683 [Pipeline] sh 00:01:30.970 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:30.970 + echo 'Start stage prepare_nvme.sh' 00:01:30.970 Start stage prepare_nvme.sh 00:01:30.970 + [[ -n 6 ]] 00:01:30.970 + disk_prefix=ex6 00:01:30.970 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:30.970 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:30.970 + source 
/var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:30.970 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.970 ++ SPDK_TEST_NVMF=1 00:01:30.970 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.970 ++ SPDK_TEST_VFIOUSER=1 00:01:30.970 ++ SPDK_TEST_USDT=1 00:01:30.970 ++ SPDK_RUN_ASAN=1 00:01:30.970 ++ SPDK_RUN_UBSAN=1 00:01:30.970 ++ SPDK_TEST_NVMF_MDNS=1 00:01:30.970 ++ NET_TYPE=virt 00:01:30.970 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:30.970 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.970 ++ RUN_NIGHTLY=1 00:01:30.970 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:30.970 + nvme_files=() 00:01:30.970 + declare -A nvme_files 00:01:30.970 + backend_dir=/var/lib/libvirt/images/backends 00:01:30.970 + nvme_files['nvme.img']=5G 00:01:30.970 + nvme_files['nvme-cmb.img']=5G 00:01:30.970 + nvme_files['nvme-multi0.img']=4G 00:01:30.970 + nvme_files['nvme-multi1.img']=4G 00:01:30.970 + nvme_files['nvme-multi2.img']=4G 00:01:30.970 + nvme_files['nvme-openstack.img']=8G 00:01:30.970 + nvme_files['nvme-zns.img']=5G 00:01:30.970 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:30.970 + (( SPDK_TEST_FTL == 1 )) 00:01:30.970 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:30.970 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.970 + for nvme in "${!nvme_files[@]}" 00:01:30.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:30.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.229 + for nvme in "${!nvme_files[@]}" 00:01:31.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:31.229 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.229 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:31.229 + echo 'End stage prepare_nvme.sh' 00:01:31.229 End stage prepare_nvme.sh 00:01:31.241 [Pipeline] sh 00:01:31.524 + DISTRO=fedora39 CPUS=10 RAM=12288 
jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.524 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:31.524 00:01:31.524 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:31.524 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:31.524 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:31.524 HELP=0 00:01:31.524 DRY_RUN=0 00:01:31.524 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:31.524 NVME_DISKS_TYPE=nvme,nvme, 00:01:31.524 NVME_AUTO_CREATE=0 00:01:31.524 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:31.524 NVME_CMB=,, 00:01:31.524 NVME_PMR=,, 00:01:31.524 NVME_ZNS=,, 00:01:31.524 NVME_MS=,, 00:01:31.524 NVME_FDP=,, 00:01:31.524 SPDK_VAGRANT_DISTRO=fedora39 00:01:31.524 SPDK_VAGRANT_VMCPU=10 00:01:31.524 SPDK_VAGRANT_VMRAM=12288 00:01:31.524 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.524 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.524 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.524 SPDK_OPENSTACK_NETWORK=0 00:01:31.524 VAGRANT_PACKAGE_BOX=0 00:01:31.524 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.524 FORCE_DISTRO=true 00:01:31.524 VAGRANT_BOX_VERSION= 00:01:31.524 EXTRA_VAGRANTFILES= 00:01:31.524 NIC_MODEL=e1000 00:01:31.524 00:01:31.524 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:31.524 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:34.089 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.659 ==> default: Creating image (snapshot of base box volume). 00:01:34.659 ==> default: Creating domain with the following settings... 
00:01:34.659 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733391201_95aca68b750026cc27fa 00:01:34.659 ==> default: -- Domain type: kvm 00:01:34.659 ==> default: -- Cpus: 10 00:01:34.659 ==> default: -- Feature: acpi 00:01:34.659 ==> default: -- Feature: apic 00:01:34.659 ==> default: -- Feature: pae 00:01:34.659 ==> default: -- Memory: 12288M 00:01:34.659 ==> default: -- Memory Backing: hugepages: 00:01:34.659 ==> default: -- Management MAC: 00:01:34.659 ==> default: -- Loader: 00:01:34.659 ==> default: -- Nvram: 00:01:34.659 ==> default: -- Base box: spdk/fedora39 00:01:34.659 ==> default: -- Storage pool: default 00:01:34.659 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733391201_95aca68b750026cc27fa.img (20G) 00:01:34.659 ==> default: -- Volume Cache: default 00:01:34.659 ==> default: -- Kernel: 00:01:34.659 ==> default: -- Initrd: 00:01:34.659 ==> default: -- Graphics Type: vnc 00:01:34.659 ==> default: -- Graphics Port: -1 00:01:34.659 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.659 ==> default: -- Graphics Password: Not defined 00:01:34.659 ==> default: -- Video Type: cirrus 00:01:34.659 ==> default: -- Video VRAM: 9216 00:01:34.659 ==> default: -- Sound Type: 00:01:34.659 ==> default: -- Keymap: en-us 00:01:34.659 ==> default: -- TPM Path: 00:01:34.659 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.659 ==> default: -- Command line args: 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.659 ==> default: -> value=-drive, 00:01:34.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:34.659 ==> default: -> value=-drive, 00:01:34.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.659 ==> default: -> value=-drive, 00:01:34.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.659 ==> default: -> value=-drive, 00:01:34.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:34.659 ==> default: -> value=-device, 00:01:34.659 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.918 ==> default: Creating shared folders metadata... 00:01:34.918 ==> default: Starting domain. 00:01:36.835 ==> default: Waiting for domain to get an IP address... 00:01:51.714 ==> default: Waiting for SSH to become available... 00:01:52.682 ==> default: Configuring and enabling network interfaces... 
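Aside on the domain definition printed above: the "-> value=" pairs under "Command line args" are raw QEMU arguments that the Vagrant libvirt provider passes through verbatim. Joined into one invocation they correspond roughly to the sketch below, using the SPDK_QEMU_EMULATOR path from the setup output; this is illustrative only, since the provider itself adds the machine, memory (12288M with hugepage backing), display, and network options, and the real ordering differs:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Net effect: one single-namespace NVMe controller (serial 12340) backed by ex6-nvme.img, plus one three-namespace controller (serial 12341) backed by the three ex6-nvme-multi*.img files, which matches the nvme0n1 and nvme1n1/n2/n3 block devices that setup.sh status reports inside the guest later in this log.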
00:01:57.955 default: SSH address: 192.168.121.120:22 00:01:57.955 default: SSH username: vagrant 00:01:57.955 default: SSH auth method: private key 00:01:59.858 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:07.974 ==> default: Mounting SSHFS shared folder... 00:02:09.348 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:09.348 ==> default: Checking Mount.. 00:02:10.725 ==> default: Folder Successfully Mounted! 00:02:10.725 ==> default: Running provisioner: file... 00:02:11.294 default: ~/.gitconfig => .gitconfig 00:02:11.861 00:02:11.861 SUCCESS! 00:02:11.861 00:02:11.861 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:11.861 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:11.861 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:11.861 00:02:11.870 [Pipeline] } 00:02:11.888 [Pipeline] // stage 00:02:11.899 [Pipeline] dir 00:02:11.900 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:11.902 [Pipeline] { 00:02:11.915 [Pipeline] catchError 00:02:11.916 [Pipeline] { 00:02:11.929 [Pipeline] sh 00:02:12.209 + vagrant ssh-config --host vagrant 00:02:12.209 + sed -ne /^Host/,$p 00:02:12.209 + tee ssh_conf 00:02:15.513 Host vagrant 00:02:15.513 HostName 192.168.121.120 00:02:15.513 User vagrant 00:02:15.513 Port 22 00:02:15.513 UserKnownHostsFile /dev/null 00:02:15.513 StrictHostKeyChecking no 00:02:15.513 PasswordAuthentication no 00:02:15.513 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:15.513 IdentitiesOnly yes 00:02:15.513 LogLevel FATAL 00:02:15.513 ForwardAgent yes 00:02:15.513 ForwardX11 yes 00:02:15.513 00:02:15.524 [Pipeline] withEnv 00:02:15.526 [Pipeline] { 00:02:15.538 [Pipeline] sh 00:02:15.814 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:15.814 source /etc/os-release 00:02:15.814 [[ -e /image.version ]] && img=$(< /image.version) 00:02:15.814 # Minimal, systemd-like check. 00:02:15.814 if [[ -e /.dockerenv ]]; then 00:02:15.814 # Clear garbage from the node's name: 00:02:15.814 # agt-er_autotest_547-896 -> autotest_547-896 00:02:15.814 # $HOSTNAME is the actual container id 00:02:15.814 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:15.814 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:15.814 # We can assume this is a mount from a host where container is running, 00:02:15.814 # so fetch its hostname to easily identify the target swarm worker. 
00:02:15.814 container="$(< /etc/hostname) ($agent)" 00:02:15.814 else 00:02:15.814 # Fallback 00:02:15.814 container=$agent 00:02:15.814 fi 00:02:15.814 fi 00:02:15.814 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:15.814 00:02:16.081 [Pipeline] } 00:02:16.095 [Pipeline] // withEnv 00:02:16.103 [Pipeline] setCustomBuildProperty 00:02:16.116 [Pipeline] stage 00:02:16.117 [Pipeline] { (Tests) 00:02:16.132 [Pipeline] sh 00:02:16.409 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:16.683 [Pipeline] sh 00:02:17.011 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:17.027 [Pipeline] timeout 00:02:17.028 Timeout set to expire in 1 hr 0 min 00:02:17.030 [Pipeline] { 00:02:17.049 [Pipeline] sh 00:02:17.327 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:17.893 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:02:17.903 [Pipeline] sh 00:02:18.179 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:18.450 [Pipeline] sh 00:02:18.734 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.009 [Pipeline] sh 00:02:19.290 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:19.290 ++ readlink -f spdk_repo 00:02:19.550 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:19.550 + [[ -n /home/vagrant/spdk_repo ]] 00:02:19.550 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:19.550 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:19.550 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:19.550 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:19.550 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:19.550 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:19.550 + cd /home/vagrant/spdk_repo 00:02:19.550 + source /etc/os-release 00:02:19.550 ++ NAME='Fedora Linux' 00:02:19.550 ++ VERSION='39 (Cloud Edition)' 00:02:19.550 ++ ID=fedora 00:02:19.550 ++ VERSION_ID=39 00:02:19.550 ++ VERSION_CODENAME= 00:02:19.550 ++ PLATFORM_ID=platform:f39 00:02:19.550 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:19.550 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:19.550 ++ LOGO=fedora-logo-icon 00:02:19.550 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:19.550 ++ HOME_URL=https://fedoraproject.org/ 00:02:19.550 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:19.550 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:19.550 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:19.550 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:19.550 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:19.550 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:19.550 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:19.550 ++ SUPPORT_END=2024-11-12 00:02:19.550 ++ VARIANT='Cloud Edition' 00:02:19.550 ++ VARIANT_ID=cloud 00:02:19.550 + uname -a 00:02:19.550 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:19.550 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:19.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:19.810 Hugepages 00:02:19.810 node hugesize free / total 00:02:19.810 node0 1048576kB 0 / 0 00:02:19.810 node0 2048kB 0 / 0 00:02:19.810 00:02:19.810 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.070 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:20.070 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:20.070 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:20.070 + rm -f /tmp/spdk-ld-path 00:02:20.070 + source autorun-spdk.conf 00:02:20.070 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.070 ++ SPDK_TEST_NVMF=1 00:02:20.070 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.070 ++ SPDK_TEST_VFIOUSER=1 00:02:20.070 ++ SPDK_TEST_USDT=1 00:02:20.070 ++ SPDK_RUN_ASAN=1 00:02:20.070 ++ SPDK_RUN_UBSAN=1 00:02:20.070 ++ SPDK_TEST_NVMF_MDNS=1 00:02:20.070 ++ NET_TYPE=virt 00:02:20.070 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:20.070 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.070 ++ RUN_NIGHTLY=1 00:02:20.070 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.070 + [[ -n '' ]] 00:02:20.070 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:20.070 + for M in /var/spdk/build-*-manifest.txt 00:02:20.070 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:20.070 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.070 + for M in /var/spdk/build-*-manifest.txt 00:02:20.070 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.070 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.070 + for M in /var/spdk/build-*-manifest.txt 00:02:20.070 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:20.070 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.070 ++ uname 00:02:20.070 + [[ Linux == \L\i\n\u\x ]] 00:02:20.070 + sudo dmesg -T 00:02:20.070 + sudo dmesg --clear 00:02:20.070 + dmesg_pid=5267 00:02:20.070 + [[ Fedora Linux == FreeBSD ]] 00:02:20.070 
+ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.070 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.070 + sudo dmesg -Tw 00:02:20.070 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.070 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.070 + export FIO_BIN=/usr/src/fio-static/fio 00:02:20.070 + FIO_BIN=/usr/src/fio-static/fio 00:02:20.070 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.070 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:20.070 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.070 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.070 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.070 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.070 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.070 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.070 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.330 09:34:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:20.330 09:34:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_VFIOUSER=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVMF_MDNS=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.330 09:34:07 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:20.330 09:34:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:20.330 09:34:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.330 09:34:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:20.330 09:34:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:20.330 09:34:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:20.330 09:34:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.330 09:34:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.330 09:34:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.330 09:34:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.330 09:34:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.330 09:34:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.330 09:34:07 -- paths/export.sh@5 -- $ export PATH 00:02:20.330 09:34:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.330 09:34:07 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:20.330 09:34:07 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:20.330 09:34:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733391247.XXXXXX 00:02:20.330 09:34:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733391247.udCIMp 00:02:20.330 09:34:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:20.330 09:34:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:20.330 09:34:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:20.330 09:34:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:20.330 09:34:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.330 09:34:07 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:20.330 09:34:07 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:20.330 09:34:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.330 09:34:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:20.330 09:34:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:20.330 09:34:07 -- pm/common@17 -- $ local monitor 00:02:20.330 09:34:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.330 09:34:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.330 09:34:07 -- pm/common@25 -- $ sleep 1 00:02:20.330 09:34:07 -- pm/common@21 -- $ date +%s 00:02:20.330 09:34:07 -- pm/common@21 -- $ date +%s 00:02:20.330 
09:34:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733391247 00:02:20.330 09:34:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733391247 00:02:20.330 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733391247_collect-vmstat.pm.log 00:02:20.330 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733391247_collect-cpu-load.pm.log 00:02:21.268 09:34:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:21.268 09:34:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.268 09:34:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.268 09:34:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.268 09:34:08 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.268 Thu Dec 5 09:34:08 AM UTC 2024 00:02:21.268 09:34:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.268 v25.01-pre-296-g8d3947977 00:02:21.268 09:34:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:21.268 09:34:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:21.268 09:34:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.268 09:34:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.268 09:34:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.268 ************************************ 00:02:21.268 START TEST asan 00:02:21.268 ************************************ 00:02:21.268 using asan 00:02:21.268 09:34:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:21.268 00:02:21.268 real 0m0.000s 00:02:21.268 user 0m0.000s 00:02:21.268 sys 0m0.000s 00:02:21.268 09:34:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.268 ************************************ 00:02:21.268 09:34:08 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.268 END TEST asan 00:02:21.268 ************************************ 00:02:21.268 09:34:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.268 09:34:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.268 09:34:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.268 09:34:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.268 09:34:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.527 ************************************ 00:02:21.527 START TEST ubsan 00:02:21.527 ************************************ 00:02:21.527 using ubsan 00:02:21.527 09:34:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:21.527 00:02:21.527 real 0m0.000s 00:02:21.527 user 0m0.000s 00:02:21.527 sys 0m0.000s 00:02:21.527 09:34:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.527 09:34:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.527 ************************************ 00:02:21.527 END TEST ubsan 00:02:21.527 ************************************ 00:02:21.527 09:34:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:21.527 09:34:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.527 09:34:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.527 09:34:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.527 09:34:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.527 09:34:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.527 09:34:08 -- 
spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.527 09:34:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.527 09:34:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:21.786 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:21.786 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.045 Using 'verbs' RDMA provider 00:02:37.864 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:50.092 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:50.092 go version go1.21.1 linux/amd64 00:02:50.092 Creating mk/config.mk...done. 00:02:50.092 Creating mk/cc.flags.mk...done. 00:02:50.092 Type 'make' to build. 00:02:50.092 09:34:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:50.092 09:34:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:50.092 09:34:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:50.092 09:34:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.092 ************************************ 00:02:50.092 START TEST make 00:02:50.092 ************************************ 00:02:50.092 09:34:36 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:50.092 make[1]: Nothing to be done for 'all'. 00:02:51.028 The Meson build system 00:02:51.028 Version: 1.5.0 00:02:51.028 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:51.028 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:51.028 Build type: native build 00:02:51.028 Project name: libvfio-user 00:02:51.028 Project version: 0.0.1 00:02:51.028 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:51.028 C linker for the host machine: cc ld.bfd 2.40-14 00:02:51.028 Host machine cpu family: x86_64 00:02:51.028 Host machine cpu: x86_64 00:02:51.028 Run-time dependency threads found: YES 00:02:51.028 Library dl found: YES 00:02:51.028 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:51.028 Run-time dependency json-c found: YES 0.17 00:02:51.028 Run-time dependency cmocka found: YES 1.1.7 00:02:51.028 Program pytest-3 found: NO 00:02:51.028 Program flake8 found: NO 00:02:51.028 Program misspell-fixer found: NO 00:02:51.028 Program restructuredtext-lint found: NO 00:02:51.028 Program valgrind found: YES (/usr/bin/valgrind) 00:02:51.028 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.028 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.028 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.028 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:51.028 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:51.028 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:51.028 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:51.028 Build targets in project: 8 00:02:51.028 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:51.028 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:51.028 00:02:51.028 libvfio-user 0.0.1 00:02:51.028 00:02:51.028 User defined options 00:02:51.028 buildtype : debug 00:02:51.028 default_library: shared 00:02:51.028 libdir : /usr/local/lib 00:02:51.028 00:02:51.028 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:51.730 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:51.730 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:51.730 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:51.730 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:51.730 [4/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:51.730 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:51.730 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:51.730 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:51.730 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:51.730 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:51.730 [10/37] Compiling C object samples/null.p/null.c.o 00:02:51.730 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:51.730 [12/37] Compiling C object samples/client.p/client.c.o 00:02:51.730 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:51.730 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:51.730 [15/37] Compiling C object samples/server.p/server.c.o 00:02:51.730 [16/37] Linking target samples/client 00:02:51.730 [17/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:52.010 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:52.010 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:52.010 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:52.010 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:52.010 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:52.010 [23/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:52.010 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:52.010 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:52.010 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:52.010 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:52.010 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:52.010 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:52.010 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:52.010 [31/37] Linking target test/unit_tests 00:02:52.268 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:52.268 [33/37] Linking target samples/server 00:02:52.268 [34/37] Linking target samples/null 00:02:52.268 [35/37] Linking target samples/lspci 00:02:52.268 [36/37] Linking target samples/gpio-pci-idio-16 00:02:52.268 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:52.268 INFO: autodetecting backend as ninja 00:02:52.268 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:52.268 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:52.834 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:52.834 ninja: no work to do. 00:03:02.807 The Meson build system 00:03:02.807 Version: 1.5.0 00:03:02.807 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:02.807 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:02.807 Build type: native build 00:03:02.807 Program cat found: YES (/usr/bin/cat) 00:03:02.807 Project name: DPDK 00:03:02.807 Project version: 24.03.0 00:03:02.807 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:02.807 C linker for the host machine: cc ld.bfd 2.40-14 00:03:02.807 Host machine cpu family: x86_64 00:03:02.807 Host machine cpu: x86_64 00:03:02.807 Message: ## Building in Developer Mode ## 00:03:02.807 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:02.807 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:02.807 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:02.807 Program python3 found: YES (/usr/bin/python3) 00:03:02.807 Program cat found: YES (/usr/bin/cat) 00:03:02.807 Compiler for C supports arguments -march=native: YES 00:03:02.807 Checking for size of "void *" : 8 00:03:02.807 Checking for size of "void *" : 8 (cached) 00:03:02.807 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:02.807 Library m found: YES 00:03:02.807 Library numa found: YES 00:03:02.807 Has header "numaif.h" : YES 00:03:02.807 Library fdt found: NO 00:03:02.807 Library execinfo found: NO 00:03:02.807 Has header "execinfo.h" : YES 00:03:02.807 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:02.807 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:02.807 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:02.807 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:02.807 Run-time dependency openssl found: YES 3.1.1 00:03:02.807 Run-time dependency libpcap found: YES 1.10.4 00:03:02.807 Has header "pcap.h" with dependency libpcap: YES 00:03:02.807 Compiler for C supports arguments -Wcast-qual: YES 00:03:02.807 Compiler for C supports arguments -Wdeprecated: YES 00:03:02.807 Compiler for C supports arguments -Wformat: YES 00:03:02.807 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:02.807 Compiler for C supports arguments -Wformat-security: NO 00:03:02.807 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:02.807 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:02.807 Compiler for C supports arguments -Wnested-externs: YES 00:03:02.807 Compiler for C supports arguments -Wold-style-definition: YES 00:03:02.807 Compiler for C supports arguments -Wpointer-arith: YES 00:03:02.807 Compiler for C supports arguments -Wsign-compare: YES 00:03:02.807 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:02.807 Compiler for C supports arguments -Wundef: YES 00:03:02.807 Compiler for C supports arguments -Wwrite-strings: YES 00:03:02.807 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:02.807 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:02.807 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:02.807 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:02.807 Program objdump found: YES (/usr/bin/objdump) 00:03:02.807 Compiler for C supports arguments -mavx512f: YES 00:03:02.807 Checking if "AVX512 checking" compiles: YES 00:03:02.807 Fetching value of define "__SSE4_2__" : 1 00:03:02.807 Fetching value of define "__AES__" : 1 00:03:02.807 Fetching value of define "__AVX__" : 1 00:03:02.807 Fetching value of define "__AVX2__" : 1 00:03:02.807 Fetching value of define "__AVX512BW__" : (undefined) 00:03:02.807 Fetching value of define "__AVX512CD__" : (undefined) 00:03:02.807 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:02.807 Fetching value of define "__AVX512F__" : (undefined) 00:03:02.807 Fetching value of define "__AVX512VL__" : (undefined) 00:03:02.807 Fetching value of define "__PCLMUL__" : 1 00:03:02.807 Fetching value of define "__RDRND__" : 1 00:03:02.807 Fetching value of define "__RDSEED__" : 1 00:03:02.807 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:02.807 Fetching value of define "__znver1__" : (undefined) 00:03:02.807 Fetching value of define "__znver2__" : (undefined) 00:03:02.807 Fetching value of define "__znver3__" : (undefined) 00:03:02.807 Fetching value of define "__znver4__" : (undefined) 00:03:02.807 Library asan found: YES 00:03:02.807 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:02.807 Message: lib/log: Defining dependency "log" 00:03:02.807 Message: lib/kvargs: Defining dependency "kvargs" 00:03:02.807 Message: lib/telemetry: Defining dependency "telemetry" 00:03:02.807 Library rt found: YES 00:03:02.807 Checking for function "getentropy" : NO 00:03:02.807 Message: lib/eal: Defining dependency "eal" 00:03:02.807 Message: lib/ring: Defining dependency "ring" 00:03:02.807 Message: lib/rcu: Defining dependency "rcu" 00:03:02.807 Message: lib/mempool: Defining dependency "mempool" 00:03:02.807 Message: lib/mbuf: Defining dependency "mbuf" 00:03:02.807 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:02.807 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:02.807 Compiler for C supports arguments -mpclmul: YES 00:03:02.807 Compiler for C supports arguments -maes: YES 00:03:02.807 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.807 Compiler for C supports arguments -mavx512bw: YES 00:03:02.807 Compiler for C supports arguments -mavx512dq: YES 00:03:02.807 Compiler for C supports arguments -mavx512vl: YES 00:03:02.807 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:02.807 Compiler for C supports arguments -mavx2: YES 00:03:02.807 Compiler for C supports arguments -mavx: YES 00:03:02.807 Message: lib/net: Defining dependency "net" 00:03:02.807 Message: lib/meter: Defining dependency "meter" 00:03:02.807 Message: lib/ethdev: Defining dependency "ethdev" 00:03:02.807 Message: lib/pci: Defining dependency "pci" 00:03:02.807 Message: lib/cmdline: Defining dependency "cmdline" 00:03:02.807 Message: lib/hash: Defining dependency "hash" 00:03:02.807 Message: lib/timer: Defining dependency "timer" 00:03:02.807 Message: lib/compressdev: Defining dependency "compressdev" 00:03:02.807 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:02.807 Message: lib/dmadev: Defining dependency "dmadev" 00:03:02.807 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:02.807 Message: lib/power: Defining dependency "power" 00:03:02.807 Message: lib/reorder: Defining dependency "reorder" 00:03:02.807 Message: lib/security: Defining dependency "security" 00:03:02.807 Has header 
"linux/userfaultfd.h" : YES 00:03:02.807 Has header "linux/vduse.h" : YES 00:03:02.807 Message: lib/vhost: Defining dependency "vhost" 00:03:02.807 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:02.807 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:02.807 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:02.807 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:02.807 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:02.807 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:02.807 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:02.807 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:02.807 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:02.807 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:02.807 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:02.807 Configuring doxy-api-html.conf using configuration 00:03:02.807 Configuring doxy-api-man.conf using configuration 00:03:02.807 Program mandb found: YES (/usr/bin/mandb) 00:03:02.807 Program sphinx-build found: NO 00:03:02.807 Configuring rte_build_config.h using configuration 00:03:02.807 Message: 00:03:02.807 ================= 00:03:02.807 Applications Enabled 00:03:02.807 ================= 00:03:02.807 00:03:02.807 apps: 00:03:02.807 00:03:02.807 00:03:02.807 Message: 00:03:02.807 ================= 00:03:02.807 Libraries Enabled 00:03:02.807 ================= 00:03:02.807 00:03:02.807 libs: 00:03:02.807 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:02.807 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:02.807 cryptodev, dmadev, power, reorder, security, vhost, 00:03:02.807 00:03:02.807 Message: 00:03:02.807 =============== 00:03:02.807 Drivers Enabled 00:03:02.807 =============== 00:03:02.807 00:03:02.807 common: 00:03:02.807 00:03:02.807 bus: 00:03:02.807 pci, vdev, 00:03:02.807 mempool: 00:03:02.807 ring, 00:03:02.807 dma: 00:03:02.807 00:03:02.807 net: 00:03:02.807 00:03:02.807 crypto: 00:03:02.807 00:03:02.807 compress: 00:03:02.807 00:03:02.807 vdpa: 00:03:02.807 00:03:02.807 00:03:02.807 Message: 00:03:02.807 ================= 00:03:02.807 Content Skipped 00:03:02.807 ================= 00:03:02.807 00:03:02.807 apps: 00:03:02.807 dumpcap: explicitly disabled via build config 00:03:02.807 graph: explicitly disabled via build config 00:03:02.807 pdump: explicitly disabled via build config 00:03:02.807 proc-info: explicitly disabled via build config 00:03:02.807 test-acl: explicitly disabled via build config 00:03:02.807 test-bbdev: explicitly disabled via build config 00:03:02.807 test-cmdline: explicitly disabled via build config 00:03:02.807 test-compress-perf: explicitly disabled via build config 00:03:02.807 test-crypto-perf: explicitly disabled via build config 00:03:02.807 test-dma-perf: explicitly disabled via build config 00:03:02.807 test-eventdev: explicitly disabled via build config 00:03:02.807 test-fib: explicitly disabled via build config 00:03:02.807 test-flow-perf: explicitly disabled via build config 00:03:02.807 test-gpudev: explicitly disabled via build config 00:03:02.807 test-mldev: explicitly disabled via build config 00:03:02.807 test-pipeline: explicitly disabled via build config 00:03:02.807 test-pmd: explicitly disabled via build config 00:03:02.807 test-regex: explicitly disabled via build config 00:03:02.807 
test-sad: explicitly disabled via build config 00:03:02.808 test-security-perf: explicitly disabled via build config 00:03:02.808 00:03:02.808 libs: 00:03:02.808 argparse: explicitly disabled via build config 00:03:02.808 metrics: explicitly disabled via build config 00:03:02.808 acl: explicitly disabled via build config 00:03:02.808 bbdev: explicitly disabled via build config 00:03:02.808 bitratestats: explicitly disabled via build config 00:03:02.808 bpf: explicitly disabled via build config 00:03:02.808 cfgfile: explicitly disabled via build config 00:03:02.808 distributor: explicitly disabled via build config 00:03:02.808 efd: explicitly disabled via build config 00:03:02.808 eventdev: explicitly disabled via build config 00:03:02.808 dispatcher: explicitly disabled via build config 00:03:02.808 gpudev: explicitly disabled via build config 00:03:02.808 gro: explicitly disabled via build config 00:03:02.808 gso: explicitly disabled via build config 00:03:02.808 ip_frag: explicitly disabled via build config 00:03:02.808 jobstats: explicitly disabled via build config 00:03:02.808 latencystats: explicitly disabled via build config 00:03:02.808 lpm: explicitly disabled via build config 00:03:02.808 member: explicitly disabled via build config 00:03:02.808 pcapng: explicitly disabled via build config 00:03:02.808 rawdev: explicitly disabled via build config 00:03:02.808 regexdev: explicitly disabled via build config 00:03:02.808 mldev: explicitly disabled via build config 00:03:02.808 rib: explicitly disabled via build config 00:03:02.808 sched: explicitly disabled via build config 00:03:02.808 stack: explicitly disabled via build config 00:03:02.808 ipsec: explicitly disabled via build config 00:03:02.808 pdcp: explicitly disabled via build config 00:03:02.808 fib: explicitly disabled via build config 00:03:02.808 port: explicitly disabled via build config 00:03:02.808 pdump: explicitly disabled via build config 00:03:02.808 table: explicitly disabled via build config 00:03:02.808 pipeline: explicitly disabled via build config 00:03:02.808 graph: explicitly disabled via build config 00:03:02.808 node: explicitly disabled via build config 00:03:02.808 00:03:02.808 drivers: 00:03:02.808 common/cpt: not in enabled drivers build config 00:03:02.808 common/dpaax: not in enabled drivers build config 00:03:02.808 common/iavf: not in enabled drivers build config 00:03:02.808 common/idpf: not in enabled drivers build config 00:03:02.808 common/ionic: not in enabled drivers build config 00:03:02.808 common/mvep: not in enabled drivers build config 00:03:02.808 common/octeontx: not in enabled drivers build config 00:03:02.808 bus/auxiliary: not in enabled drivers build config 00:03:02.808 bus/cdx: not in enabled drivers build config 00:03:02.808 bus/dpaa: not in enabled drivers build config 00:03:02.808 bus/fslmc: not in enabled drivers build config 00:03:02.808 bus/ifpga: not in enabled drivers build config 00:03:02.808 bus/platform: not in enabled drivers build config 00:03:02.808 bus/uacce: not in enabled drivers build config 00:03:02.808 bus/vmbus: not in enabled drivers build config 00:03:02.808 common/cnxk: not in enabled drivers build config 00:03:02.808 common/mlx5: not in enabled drivers build config 00:03:02.808 common/nfp: not in enabled drivers build config 00:03:02.808 common/nitrox: not in enabled drivers build config 00:03:02.808 common/qat: not in enabled drivers build config 00:03:02.808 common/sfc_efx: not in enabled drivers build config 00:03:02.808 mempool/bucket: not in enabled 
drivers build config 00:03:02.808 mempool/cnxk: not in enabled drivers build config 00:03:02.808 mempool/dpaa: not in enabled drivers build config 00:03:02.808 mempool/dpaa2: not in enabled drivers build config 00:03:02.808 mempool/octeontx: not in enabled drivers build config 00:03:02.808 mempool/stack: not in enabled drivers build config 00:03:02.808 dma/cnxk: not in enabled drivers build config 00:03:02.808 dma/dpaa: not in enabled drivers build config 00:03:02.808 dma/dpaa2: not in enabled drivers build config 00:03:02.808 dma/hisilicon: not in enabled drivers build config 00:03:02.808 dma/idxd: not in enabled drivers build config 00:03:02.808 dma/ioat: not in enabled drivers build config 00:03:02.808 dma/skeleton: not in enabled drivers build config 00:03:02.808 net/af_packet: not in enabled drivers build config 00:03:02.808 net/af_xdp: not in enabled drivers build config 00:03:02.808 net/ark: not in enabled drivers build config 00:03:02.808 net/atlantic: not in enabled drivers build config 00:03:02.808 net/avp: not in enabled drivers build config 00:03:02.808 net/axgbe: not in enabled drivers build config 00:03:02.808 net/bnx2x: not in enabled drivers build config 00:03:02.808 net/bnxt: not in enabled drivers build config 00:03:02.808 net/bonding: not in enabled drivers build config 00:03:02.808 net/cnxk: not in enabled drivers build config 00:03:02.808 net/cpfl: not in enabled drivers build config 00:03:02.808 net/cxgbe: not in enabled drivers build config 00:03:02.808 net/dpaa: not in enabled drivers build config 00:03:02.808 net/dpaa2: not in enabled drivers build config 00:03:02.808 net/e1000: not in enabled drivers build config 00:03:02.808 net/ena: not in enabled drivers build config 00:03:02.808 net/enetc: not in enabled drivers build config 00:03:02.808 net/enetfec: not in enabled drivers build config 00:03:02.808 net/enic: not in enabled drivers build config 00:03:02.808 net/failsafe: not in enabled drivers build config 00:03:02.808 net/fm10k: not in enabled drivers build config 00:03:02.808 net/gve: not in enabled drivers build config 00:03:02.808 net/hinic: not in enabled drivers build config 00:03:02.808 net/hns3: not in enabled drivers build config 00:03:02.808 net/i40e: not in enabled drivers build config 00:03:02.808 net/iavf: not in enabled drivers build config 00:03:02.808 net/ice: not in enabled drivers build config 00:03:02.808 net/idpf: not in enabled drivers build config 00:03:02.808 net/igc: not in enabled drivers build config 00:03:02.808 net/ionic: not in enabled drivers build config 00:03:02.808 net/ipn3ke: not in enabled drivers build config 00:03:02.808 net/ixgbe: not in enabled drivers build config 00:03:02.808 net/mana: not in enabled drivers build config 00:03:02.808 net/memif: not in enabled drivers build config 00:03:02.808 net/mlx4: not in enabled drivers build config 00:03:02.808 net/mlx5: not in enabled drivers build config 00:03:02.808 net/mvneta: not in enabled drivers build config 00:03:02.808 net/mvpp2: not in enabled drivers build config 00:03:02.808 net/netvsc: not in enabled drivers build config 00:03:02.808 net/nfb: not in enabled drivers build config 00:03:02.808 net/nfp: not in enabled drivers build config 00:03:02.808 net/ngbe: not in enabled drivers build config 00:03:02.808 net/null: not in enabled drivers build config 00:03:02.808 net/octeontx: not in enabled drivers build config 00:03:02.808 net/octeon_ep: not in enabled drivers build config 00:03:02.808 net/pcap: not in enabled drivers build config 00:03:02.808 net/pfe: not in 
enabled drivers build config 00:03:02.808 net/qede: not in enabled drivers build config 00:03:02.808 net/ring: not in enabled drivers build config 00:03:02.808 net/sfc: not in enabled drivers build config 00:03:02.808 net/softnic: not in enabled drivers build config 00:03:02.808 net/tap: not in enabled drivers build config 00:03:02.808 net/thunderx: not in enabled drivers build config 00:03:02.808 net/txgbe: not in enabled drivers build config 00:03:02.808 net/vdev_netvsc: not in enabled drivers build config 00:03:02.808 net/vhost: not in enabled drivers build config 00:03:02.808 net/virtio: not in enabled drivers build config 00:03:02.808 net/vmxnet3: not in enabled drivers build config 00:03:02.808 raw/*: missing internal dependency, "rawdev" 00:03:02.808 crypto/armv8: not in enabled drivers build config 00:03:02.808 crypto/bcmfs: not in enabled drivers build config 00:03:02.808 crypto/caam_jr: not in enabled drivers build config 00:03:02.808 crypto/ccp: not in enabled drivers build config 00:03:02.808 crypto/cnxk: not in enabled drivers build config 00:03:02.808 crypto/dpaa_sec: not in enabled drivers build config 00:03:02.808 crypto/dpaa2_sec: not in enabled drivers build config 00:03:02.808 crypto/ipsec_mb: not in enabled drivers build config 00:03:02.808 crypto/mlx5: not in enabled drivers build config 00:03:02.808 crypto/mvsam: not in enabled drivers build config 00:03:02.808 crypto/nitrox: not in enabled drivers build config 00:03:02.808 crypto/null: not in enabled drivers build config 00:03:02.808 crypto/octeontx: not in enabled drivers build config 00:03:02.808 crypto/openssl: not in enabled drivers build config 00:03:02.808 crypto/scheduler: not in enabled drivers build config 00:03:02.808 crypto/uadk: not in enabled drivers build config 00:03:02.808 crypto/virtio: not in enabled drivers build config 00:03:02.808 compress/isal: not in enabled drivers build config 00:03:02.808 compress/mlx5: not in enabled drivers build config 00:03:02.808 compress/nitrox: not in enabled drivers build config 00:03:02.808 compress/octeontx: not in enabled drivers build config 00:03:02.808 compress/zlib: not in enabled drivers build config 00:03:02.808 regex/*: missing internal dependency, "regexdev" 00:03:02.808 ml/*: missing internal dependency, "mldev" 00:03:02.808 vdpa/ifc: not in enabled drivers build config 00:03:02.808 vdpa/mlx5: not in enabled drivers build config 00:03:02.808 vdpa/nfp: not in enabled drivers build config 00:03:02.808 vdpa/sfc: not in enabled drivers build config 00:03:02.808 event/*: missing internal dependency, "eventdev" 00:03:02.808 baseband/*: missing internal dependency, "bbdev" 00:03:02.808 gpu/*: missing internal dependency, "gpudev" 00:03:02.808 00:03:02.808 00:03:03.067 Build targets in project: 85 00:03:03.067 00:03:03.067 DPDK 24.03.0 00:03:03.067 00:03:03.067 User defined options 00:03:03.067 buildtype : debug 00:03:03.067 default_library : shared 00:03:03.067 libdir : lib 00:03:03.067 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:03.067 b_sanitize : address 00:03:03.067 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:03.067 c_link_args : 00:03:03.067 cpu_instruction_set: native 00:03:03.067 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:03.067 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:03.067 enable_docs : false 00:03:03.067 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:03.067 enable_kmods : false 00:03:03.067 max_lcores : 128 00:03:03.067 tests : false 00:03:03.067 00:03:03.067 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.633 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:03.633 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:03.633 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:03.633 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.633 [4/268] Linking static target lib/librte_kvargs.a 00:03:03.633 [5/268] Linking static target lib/librte_log.a 00:03:03.633 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:04.199 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.199 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.199 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:04.458 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:04.458 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.458 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.458 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.458 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.458 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.716 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:04.716 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.716 [18/268] Linking static target lib/librte_telemetry.a 00:03:04.716 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.716 [20/268] Linking target lib/librte_log.so.24.1 00:03:04.974 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:04.975 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:04.975 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:05.233 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:05.233 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:05.491 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:05.491 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:05.491 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:05.491 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:05.491 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:05.491 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.750 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 
00:03:05.750 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:05.750 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:05.750 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:06.009 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:06.009 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:06.268 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:06.268 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:06.268 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:06.268 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:06.268 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:06.268 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:06.526 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:06.526 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:06.785 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.785 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:06.785 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:06.785 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.043 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:07.043 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:07.043 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:07.301 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:07.301 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:07.301 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:07.301 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:07.868 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:07.868 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:07.868 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:07.868 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:07.868 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:07.868 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.126 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.126 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.126 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.126 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.126 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.691 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.691 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.691 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.691 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.691 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.691 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.691 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.691 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.691 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.950 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.950 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.950 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.208 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.467 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.467 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.467 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.467 [84/268] Linking static target lib/librte_ring.a 00:03:09.467 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.727 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.727 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.727 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.727 [89/268] Linking static target lib/librte_eal.a 00:03:09.727 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.727 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.727 [92/268] Linking static target lib/librte_rcu.a 00:03:09.985 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.985 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.244 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.244 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.244 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.244 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.244 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.244 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:10.244 [101/268] Linking static target lib/librte_mempool.a 00:03:10.503 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.503 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.761 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.761 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.761 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.761 [107/268] Linking static target lib/librte_net.a 00:03:11.327 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:11.327 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:11.327 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:11.327 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:11.327 [112/268] Linking static target lib/librte_mbuf.a 00:03:11.327 [113/268] Linking static target lib/librte_meter.a 00:03:11.327 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.327 
[115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:11.586 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.586 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.586 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.844 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.844 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:12.103 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:12.360 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.360 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:12.618 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:12.618 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:12.618 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:12.875 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:12.875 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:12.875 [129/268] Linking static target lib/librte_pci.a 00:03:12.875 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:12.875 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:12.875 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.132 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:13.132 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:13.132 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.132 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.132 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.132 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:13.132 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:13.132 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.132 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.132 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:13.389 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:13.389 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:13.389 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:13.389 [146/268] Linking static target lib/librte_cmdline.a 00:03:13.645 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:13.645 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:13.901 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:13.902 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:14.181 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:14.181 [152/268] Linking static target lib/librte_timer.a 00:03:14.181 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.181 [154/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.437 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.694 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:14.694 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:14.694 [158/268] Linking static target lib/librte_ethdev.a 00:03:14.694 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:14.694 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.694 [161/268] Linking static target lib/librte_compressdev.a 00:03:14.952 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:14.952 [163/268] Linking static target lib/librte_hash.a 00:03:14.952 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:14.952 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.211 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.211 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:15.211 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:15.469 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.469 [170/268] Linking static target lib/librte_dmadev.a 00:03:15.469 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:15.727 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.727 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:15.727 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.001 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.001 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.001 [177/268] Linking static target lib/librte_cryptodev.a 00:03:16.001 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:16.276 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:16.276 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:16.276 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.276 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:16.276 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.276 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:16.845 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:16.845 [186/268] Linking static target lib/librte_power.a 00:03:17.104 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.104 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.104 [189/268] Linking static target lib/librte_reorder.a 00:03:17.104 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.104 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:17.104 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.104 [193/268] Linking static target lib/librte_security.a 00:03:17.672 [194/268] Generating lib/reorder.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:17.672 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:17.929 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.185 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.185 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.185 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.455 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.455 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:18.712 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.712 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.712 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.970 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.970 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.970 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.228 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.228 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:19.486 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.486 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.486 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.486 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.744 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.744 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.744 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:19.744 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.744 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.744 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.744 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.744 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:20.003 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.003 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.003 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.003 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:20.003 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.263 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.831 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.398 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.398 [230/268] Linking target lib/librte_eal.so.24.1 00:03:21.657 [231/268] Generating symbol file 
lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:21.657 [232/268] Linking target lib/librte_pci.so.24.1 00:03:21.657 [233/268] Linking target lib/librte_meter.so.24.1 00:03:21.657 [234/268] Linking target lib/librte_ring.so.24.1 00:03:21.657 [235/268] Linking target lib/librte_timer.so.24.1 00:03:21.657 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:21.657 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:21.657 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:21.657 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:21.657 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:21.657 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:21.657 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:21.915 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:21.915 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:21.915 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:21.915 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:21.915 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:21.915 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:21.915 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:22.174 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:22.174 [251/268] Linking target lib/librte_net.so.24.1 00:03:22.174 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:22.174 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:22.174 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:22.433 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:22.433 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:22.433 [257/268] Linking target lib/librte_hash.so.24.1 00:03:22.433 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:22.433 [259/268] Linking target lib/librte_security.so.24.1 00:03:22.433 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.433 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:22.692 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:22.692 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:22.692 [264/268] Linking target lib/librte_power.so.24.1 00:03:24.070 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.070 [266/268] Linking static target lib/librte_vhost.a 00:03:25.975 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.975 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:25.975 INFO: autodetecting backend as ninja 00:03:25.975 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:47.929 CC lib/ut_mock/mock.o 00:03:47.929 CC lib/log/log.o 00:03:47.929 CC lib/log/log_flags.o 00:03:47.929 CC lib/log/log_deprecated.o 00:03:47.929 CC lib/ut/ut.o 00:03:47.929 LIB libspdk_log.a 00:03:47.929 LIB libspdk_ut_mock.a 00:03:47.929 LIB libspdk_ut.a 00:03:47.929 SO libspdk_ut.so.2.0 00:03:47.929 SO libspdk_ut_mock.so.6.0 00:03:47.929 SO 
libspdk_log.so.7.1 00:03:47.929 SYMLINK libspdk_ut_mock.so 00:03:47.929 SYMLINK libspdk_ut.so 00:03:47.929 SYMLINK libspdk_log.so 00:03:47.929 CC lib/dma/dma.o 00:03:47.929 CC lib/ioat/ioat.o 00:03:47.929 CXX lib/trace_parser/trace.o 00:03:47.929 CC lib/util/base64.o 00:03:47.929 CC lib/util/bit_array.o 00:03:47.929 CC lib/util/cpuset.o 00:03:47.929 CC lib/util/crc32.o 00:03:47.929 CC lib/util/crc32c.o 00:03:47.929 CC lib/util/crc16.o 00:03:47.929 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.929 CC lib/util/crc32_ieee.o 00:03:47.929 CC lib/util/crc64.o 00:03:47.929 CC lib/vfio_user/host/vfio_user.o 00:03:47.929 CC lib/util/dif.o 00:03:47.929 LIB libspdk_dma.a 00:03:47.929 CC lib/util/fd.o 00:03:47.929 SO libspdk_dma.so.5.0 00:03:47.929 CC lib/util/fd_group.o 00:03:47.929 CC lib/util/file.o 00:03:47.929 CC lib/util/hexlify.o 00:03:47.929 SYMLINK libspdk_dma.so 00:03:47.929 CC lib/util/iov.o 00:03:47.929 LIB libspdk_ioat.a 00:03:47.929 CC lib/util/math.o 00:03:47.929 SO libspdk_ioat.so.7.0 00:03:47.929 CC lib/util/net.o 00:03:47.929 LIB libspdk_vfio_user.a 00:03:47.929 SYMLINK libspdk_ioat.so 00:03:47.929 CC lib/util/pipe.o 00:03:47.929 SO libspdk_vfio_user.so.5.0 00:03:47.929 CC lib/util/strerror_tls.o 00:03:47.929 SYMLINK libspdk_vfio_user.so 00:03:47.929 CC lib/util/string.o 00:03:47.929 CC lib/util/uuid.o 00:03:47.929 CC lib/util/xor.o 00:03:47.929 CC lib/util/zipf.o 00:03:47.929 CC lib/util/md5.o 00:03:47.929 LIB libspdk_util.a 00:03:47.929 SO libspdk_util.so.10.1 00:03:47.929 LIB libspdk_trace_parser.a 00:03:47.929 SO libspdk_trace_parser.so.6.0 00:03:47.929 SYMLINK libspdk_util.so 00:03:47.929 SYMLINK libspdk_trace_parser.so 00:03:47.929 CC lib/idxd/idxd.o 00:03:47.929 CC lib/idxd/idxd_user.o 00:03:47.929 CC lib/idxd/idxd_kernel.o 00:03:47.929 CC lib/conf/conf.o 00:03:47.929 CC lib/json/json_util.o 00:03:47.929 CC lib/json/json_parse.o 00:03:47.929 CC lib/json/json_write.o 00:03:47.929 CC lib/env_dpdk/env.o 00:03:47.929 CC lib/rdma_utils/rdma_utils.o 00:03:47.929 CC lib/vmd/vmd.o 00:03:47.929 CC lib/vmd/led.o 00:03:47.929 LIB libspdk_conf.a 00:03:47.929 SO libspdk_conf.so.6.0 00:03:47.929 CC lib/env_dpdk/memory.o 00:03:47.929 CC lib/env_dpdk/pci.o 00:03:47.929 CC lib/env_dpdk/init.o 00:03:47.929 SYMLINK libspdk_conf.so 00:03:47.929 CC lib/env_dpdk/threads.o 00:03:47.929 LIB libspdk_rdma_utils.a 00:03:47.929 CC lib/env_dpdk/pci_ioat.o 00:03:47.929 LIB libspdk_json.a 00:03:47.929 SO libspdk_rdma_utils.so.1.0 00:03:47.929 SO libspdk_json.so.6.0 00:03:47.929 SYMLINK libspdk_rdma_utils.so 00:03:47.929 CC lib/env_dpdk/pci_virtio.o 00:03:47.929 CC lib/env_dpdk/pci_vmd.o 00:03:47.929 SYMLINK libspdk_json.so 00:03:47.929 CC lib/env_dpdk/pci_idxd.o 00:03:47.929 CC lib/env_dpdk/pci_event.o 00:03:47.929 CC lib/env_dpdk/sigbus_handler.o 00:03:47.929 CC lib/env_dpdk/pci_dpdk.o 00:03:47.929 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.929 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.929 LIB libspdk_idxd.a 00:03:47.929 SO libspdk_idxd.so.12.1 00:03:47.929 LIB libspdk_vmd.a 00:03:47.929 SYMLINK libspdk_idxd.so 00:03:47.929 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.929 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.929 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.929 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.929 CC lib/rdma_provider/common.o 00:03:47.929 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.929 SO libspdk_vmd.so.6.0 00:03:47.929 SYMLINK libspdk_vmd.so 00:03:48.187 LIB libspdk_rdma_provider.a 00:03:48.187 SO libspdk_rdma_provider.so.7.0 00:03:48.187 LIB libspdk_jsonrpc.a 00:03:48.187 SO 
libspdk_jsonrpc.so.6.0 00:03:48.187 SYMLINK libspdk_rdma_provider.so 00:03:48.187 SYMLINK libspdk_jsonrpc.so 00:03:48.446 CC lib/rpc/rpc.o 00:03:48.704 LIB libspdk_env_dpdk.a 00:03:48.705 LIB libspdk_rpc.a 00:03:48.963 SO libspdk_rpc.so.6.0 00:03:48.963 SO libspdk_env_dpdk.so.15.1 00:03:48.963 SYMLINK libspdk_rpc.so 00:03:48.963 SYMLINK libspdk_env_dpdk.so 00:03:49.221 CC lib/notify/notify_rpc.o 00:03:49.221 CC lib/notify/notify.o 00:03:49.221 CC lib/trace/trace.o 00:03:49.221 CC lib/trace/trace_flags.o 00:03:49.221 CC lib/trace/trace_rpc.o 00:03:49.221 CC lib/keyring/keyring.o 00:03:49.221 CC lib/keyring/keyring_rpc.o 00:03:49.480 LIB libspdk_notify.a 00:03:49.480 SO libspdk_notify.so.6.0 00:03:49.480 SYMLINK libspdk_notify.so 00:03:49.480 LIB libspdk_keyring.a 00:03:49.480 LIB libspdk_trace.a 00:03:49.480 SO libspdk_keyring.so.2.0 00:03:49.480 SO libspdk_trace.so.11.0 00:03:49.480 SYMLINK libspdk_keyring.so 00:03:49.480 SYMLINK libspdk_trace.so 00:03:49.739 CC lib/sock/sock.o 00:03:49.739 CC lib/sock/sock_rpc.o 00:03:49.739 CC lib/thread/thread.o 00:03:49.739 CC lib/thread/iobuf.o 00:03:50.675 LIB libspdk_sock.a 00:03:50.675 SO libspdk_sock.so.10.0 00:03:50.675 SYMLINK libspdk_sock.so 00:03:50.934 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:50.934 CC lib/nvme/nvme_ctrlr.o 00:03:50.934 CC lib/nvme/nvme_fabric.o 00:03:50.934 CC lib/nvme/nvme_ns_cmd.o 00:03:50.934 CC lib/nvme/nvme_ns.o 00:03:50.934 CC lib/nvme/nvme_qpair.o 00:03:50.935 CC lib/nvme/nvme_pcie.o 00:03:50.935 CC lib/nvme/nvme_pcie_common.o 00:03:50.935 CC lib/nvme/nvme.o 00:03:51.873 CC lib/nvme/nvme_quirks.o 00:03:51.873 CC lib/nvme/nvme_transport.o 00:03:51.873 LIB libspdk_thread.a 00:03:51.873 CC lib/nvme/nvme_discovery.o 00:03:51.873 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.873 SO libspdk_thread.so.11.0 00:03:51.873 SYMLINK libspdk_thread.so 00:03:51.873 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.873 CC lib/nvme/nvme_tcp.o 00:03:51.873 CC lib/nvme/nvme_opal.o 00:03:52.132 CC lib/accel/accel.o 00:03:52.132 CC lib/accel/accel_rpc.o 00:03:52.391 CC lib/accel/accel_sw.o 00:03:52.391 CC lib/nvme/nvme_io_msg.o 00:03:52.391 CC lib/nvme/nvme_poll_group.o 00:03:52.649 CC lib/nvme/nvme_zns.o 00:03:52.649 CC lib/nvme/nvme_stubs.o 00:03:52.649 CC lib/blob/blobstore.o 00:03:52.649 CC lib/blob/request.o 00:03:52.649 CC lib/init/json_config.o 00:03:52.908 CC lib/init/subsystem.o 00:03:53.166 CC lib/blob/zeroes.o 00:03:53.166 CC lib/init/subsystem_rpc.o 00:03:53.166 CC lib/init/rpc.o 00:03:53.166 CC lib/virtio/virtio.o 00:03:53.166 CC lib/virtio/virtio_vhost_user.o 00:03:53.425 CC lib/vfu_tgt/tgt_endpoint.o 00:03:53.425 CC lib/vfu_tgt/tgt_rpc.o 00:03:53.425 CC lib/fsdev/fsdev.o 00:03:53.425 LIB libspdk_init.a 00:03:53.425 SO libspdk_init.so.6.0 00:03:53.425 LIB libspdk_accel.a 00:03:53.425 SO libspdk_accel.so.16.0 00:03:53.425 CC lib/virtio/virtio_vfio_user.o 00:03:53.425 SYMLINK libspdk_init.so 00:03:53.425 CC lib/fsdev/fsdev_io.o 00:03:53.683 SYMLINK libspdk_accel.so 00:03:53.683 CC lib/virtio/virtio_pci.o 00:03:53.683 CC lib/nvme/nvme_auth.o 00:03:53.683 CC lib/blob/blob_bs_dev.o 00:03:53.683 LIB libspdk_vfu_tgt.a 00:03:53.683 CC lib/fsdev/fsdev_rpc.o 00:03:53.683 SO libspdk_vfu_tgt.so.3.0 00:03:53.942 SYMLINK libspdk_vfu_tgt.so 00:03:53.942 CC lib/nvme/nvme_cuse.o 00:03:53.942 CC lib/nvme/nvme_vfio_user.o 00:03:53.942 CC lib/event/app.o 00:03:53.942 CC lib/event/reactor.o 00:03:53.942 CC lib/bdev/bdev.o 00:03:53.942 CC lib/bdev/bdev_rpc.o 00:03:53.942 LIB libspdk_virtio.a 00:03:53.942 SO libspdk_virtio.so.7.0 00:03:54.202 LIB libspdk_fsdev.a 
00:03:54.202 SYMLINK libspdk_virtio.so 00:03:54.202 CC lib/bdev/bdev_zone.o 00:03:54.202 SO libspdk_fsdev.so.2.0 00:03:54.202 SYMLINK libspdk_fsdev.so 00:03:54.202 CC lib/nvme/nvme_rdma.o 00:03:54.202 CC lib/bdev/part.o 00:03:54.202 CC lib/bdev/scsi_nvme.o 00:03:54.461 CC lib/event/log_rpc.o 00:03:54.461 CC lib/event/app_rpc.o 00:03:54.461 CC lib/event/scheduler_static.o 00:03:54.720 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:54.720 LIB libspdk_event.a 00:03:54.979 SO libspdk_event.so.14.0 00:03:54.979 SYMLINK libspdk_event.so 00:03:55.547 LIB libspdk_fuse_dispatcher.a 00:03:55.547 SO libspdk_fuse_dispatcher.so.1.0 00:03:55.547 SYMLINK libspdk_fuse_dispatcher.so 00:03:55.805 LIB libspdk_nvme.a 00:03:56.064 SO libspdk_nvme.so.15.0 00:03:56.324 SYMLINK libspdk_nvme.so 00:03:56.583 LIB libspdk_blob.a 00:03:56.583 SO libspdk_blob.so.12.0 00:03:56.843 SYMLINK libspdk_blob.so 00:03:56.843 CC lib/blobfs/blobfs.o 00:03:56.843 CC lib/blobfs/tree.o 00:03:56.843 CC lib/lvol/lvol.o 00:03:57.102 LIB libspdk_bdev.a 00:03:57.361 SO libspdk_bdev.so.17.0 00:03:57.361 SYMLINK libspdk_bdev.so 00:03:57.620 CC lib/nbd/nbd.o 00:03:57.620 CC lib/scsi/dev.o 00:03:57.620 CC lib/ftl/ftl_core.o 00:03:57.620 CC lib/scsi/lun.o 00:03:57.620 CC lib/nbd/nbd_rpc.o 00:03:57.620 CC lib/scsi/port.o 00:03:57.620 CC lib/nvmf/ctrlr.o 00:03:57.620 CC lib/ublk/ublk.o 00:03:57.885 CC lib/scsi/scsi.o 00:03:57.885 CC lib/ftl/ftl_init.o 00:03:57.885 CC lib/nvmf/ctrlr_discovery.o 00:03:57.885 CC lib/ftl/ftl_layout.o 00:03:57.885 CC lib/scsi/scsi_bdev.o 00:03:58.145 CC lib/nvmf/ctrlr_bdev.o 00:03:58.145 LIB libspdk_nbd.a 00:03:58.145 LIB libspdk_blobfs.a 00:03:58.145 LIB libspdk_lvol.a 00:03:58.145 SO libspdk_nbd.so.7.0 00:03:58.145 SO libspdk_lvol.so.11.0 00:03:58.145 SO libspdk_blobfs.so.11.0 00:03:58.145 CC lib/nvmf/subsystem.o 00:03:58.145 SYMLINK libspdk_nbd.so 00:03:58.145 CC lib/nvmf/nvmf.o 00:03:58.145 SYMLINK libspdk_lvol.so 00:03:58.145 CC lib/nvmf/nvmf_rpc.o 00:03:58.145 SYMLINK libspdk_blobfs.so 00:03:58.145 CC lib/nvmf/transport.o 00:03:58.403 CC lib/ftl/ftl_debug.o 00:03:58.403 CC lib/ublk/ublk_rpc.o 00:03:58.403 CC lib/ftl/ftl_io.o 00:03:58.662 CC lib/scsi/scsi_pr.o 00:03:58.662 CC lib/ftl/ftl_sb.o 00:03:58.662 LIB libspdk_ublk.a 00:03:58.662 SO libspdk_ublk.so.3.0 00:03:58.662 CC lib/ftl/ftl_l2p.o 00:03:58.662 SYMLINK libspdk_ublk.so 00:03:58.662 CC lib/ftl/ftl_l2p_flat.o 00:03:58.922 CC lib/ftl/ftl_nv_cache.o 00:03:58.922 CC lib/nvmf/tcp.o 00:03:58.922 CC lib/scsi/scsi_rpc.o 00:03:58.922 CC lib/scsi/task.o 00:03:58.922 CC lib/ftl/ftl_band.o 00:03:58.922 CC lib/nvmf/stubs.o 00:03:59.181 CC lib/nvmf/mdns_server.o 00:03:59.181 CC lib/nvmf/vfio_user.o 00:03:59.181 LIB libspdk_scsi.a 00:03:59.181 CC lib/nvmf/rdma.o 00:03:59.181 SO libspdk_scsi.so.9.0 00:03:59.439 SYMLINK libspdk_scsi.so 00:03:59.439 CC lib/nvmf/auth.o 00:03:59.439 CC lib/ftl/ftl_band_ops.o 00:03:59.698 CC lib/ftl/ftl_writer.o 00:03:59.698 CC lib/iscsi/conn.o 00:03:59.698 CC lib/ftl/ftl_rq.o 00:03:59.698 CC lib/vhost/vhost.o 00:03:59.956 CC lib/vhost/vhost_rpc.o 00:03:59.956 CC lib/vhost/vhost_scsi.o 00:03:59.956 CC lib/ftl/ftl_reloc.o 00:03:59.956 CC lib/iscsi/init_grp.o 00:04:00.215 CC lib/iscsi/iscsi.o 00:04:00.215 CC lib/iscsi/param.o 00:04:00.474 CC lib/ftl/ftl_l2p_cache.o 00:04:00.474 CC lib/ftl/ftl_p2l.o 00:04:00.733 CC lib/iscsi/portal_grp.o 00:04:00.733 CC lib/iscsi/tgt_node.o 00:04:00.733 CC lib/iscsi/iscsi_subsystem.o 00:04:00.733 CC lib/iscsi/iscsi_rpc.o 00:04:00.991 CC lib/iscsi/task.o 00:04:00.991 CC lib/ftl/ftl_p2l_log.o 00:04:00.991 CC 
lib/vhost/vhost_blk.o 00:04:00.991 CC lib/vhost/rte_vhost_user.o 00:04:00.992 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.250 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.250 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.250 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.250 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:01.250 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.510 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.769 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.769 CC lib/ftl/utils/ftl_conf.o 00:04:01.769 CC lib/ftl/utils/ftl_md.o 00:04:01.769 LIB libspdk_nvmf.a 00:04:01.769 CC lib/ftl/utils/ftl_mempool.o 00:04:01.769 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.769 CC lib/ftl/utils/ftl_property.o 00:04:02.028 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.028 SO libspdk_nvmf.so.20.0 00:04:02.028 LIB libspdk_iscsi.a 00:04:02.028 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.028 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.028 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.028 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.028 SO libspdk_iscsi.so.8.0 00:04:02.028 LIB libspdk_vhost.a 00:04:02.287 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.287 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.287 SO libspdk_vhost.so.8.0 00:04:02.287 SYMLINK libspdk_nvmf.so 00:04:02.287 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.287 SYMLINK libspdk_iscsi.so 00:04:02.287 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.287 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.287 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.287 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:02.287 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:02.287 SYMLINK libspdk_vhost.so 00:04:02.287 CC lib/ftl/base/ftl_base_dev.o 00:04:02.287 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.287 CC lib/ftl/ftl_trace.o 00:04:02.853 LIB libspdk_ftl.a 00:04:02.853 SO libspdk_ftl.so.9.0 00:04:03.111 SYMLINK libspdk_ftl.so 00:04:03.677 CC module/vfu_device/vfu_virtio.o 00:04:03.677 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.677 CC module/accel/dsa/accel_dsa.o 00:04:03.677 CC module/sock/posix/posix.o 00:04:03.677 CC module/fsdev/aio/fsdev_aio.o 00:04:03.677 CC module/blob/bdev/blob_bdev.o 00:04:03.677 CC module/accel/error/accel_error.o 00:04:03.677 CC module/keyring/file/keyring.o 00:04:03.677 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:03.677 CC module/accel/ioat/accel_ioat.o 00:04:03.677 LIB libspdk_env_dpdk_rpc.a 00:04:03.677 SO libspdk_env_dpdk_rpc.so.6.0 00:04:03.677 SYMLINK libspdk_env_dpdk_rpc.so 00:04:03.677 CC module/accel/error/accel_error_rpc.o 00:04:03.677 CC module/keyring/file/keyring_rpc.o 00:04:03.936 CC module/accel/ioat/accel_ioat_rpc.o 00:04:03.936 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:03.936 LIB libspdk_scheduler_dynamic.a 00:04:03.936 SO libspdk_scheduler_dynamic.so.4.0 00:04:03.936 LIB libspdk_accel_error.a 00:04:03.936 CC module/accel/dsa/accel_dsa_rpc.o 00:04:03.936 LIB libspdk_keyring_file.a 00:04:03.936 SYMLINK libspdk_scheduler_dynamic.so 00:04:03.936 SO libspdk_accel_error.so.2.0 00:04:03.936 LIB libspdk_blob_bdev.a 00:04:03.936 SO libspdk_keyring_file.so.2.0 00:04:03.936 SO libspdk_blob_bdev.so.12.0 00:04:03.936 SYMLINK libspdk_accel_error.so 00:04:03.936 LIB libspdk_accel_ioat.a 00:04:03.936 CC module/vfu_device/vfu_virtio_blk.o 00:04:03.936 SYMLINK libspdk_keyring_file.so 00:04:03.936 SO libspdk_accel_ioat.so.6.0 00:04:03.936 SYMLINK 
libspdk_blob_bdev.so 00:04:04.195 LIB libspdk_accel_dsa.a 00:04:04.195 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:04.195 SO libspdk_accel_dsa.so.5.0 00:04:04.195 SYMLINK libspdk_accel_ioat.so 00:04:04.195 CC module/vfu_device/vfu_virtio_scsi.o 00:04:04.195 SYMLINK libspdk_accel_dsa.so 00:04:04.195 CC module/vfu_device/vfu_virtio_rpc.o 00:04:04.195 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.195 CC module/accel/iaa/accel_iaa.o 00:04:04.195 CC module/keyring/linux/keyring.o 00:04:04.195 LIB libspdk_scheduler_dpdk_governor.a 00:04:04.195 CC module/vfu_device/vfu_virtio_fs.o 00:04:04.195 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:04.454 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.454 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:04.454 LIB libspdk_scheduler_gscheduler.a 00:04:04.454 CC module/keyring/linux/keyring_rpc.o 00:04:04.454 CC module/fsdev/aio/linux_aio_mgr.o 00:04:04.454 SO libspdk_scheduler_gscheduler.so.4.0 00:04:04.454 SYMLINK libspdk_scheduler_gscheduler.so 00:04:04.454 LIB libspdk_sock_posix.a 00:04:04.454 LIB libspdk_accel_iaa.a 00:04:04.454 SO libspdk_accel_iaa.so.3.0 00:04:04.454 SO libspdk_sock_posix.so.6.0 00:04:04.454 LIB libspdk_keyring_linux.a 00:04:04.454 LIB libspdk_vfu_device.a 00:04:04.713 SO libspdk_keyring_linux.so.1.0 00:04:04.713 SYMLINK libspdk_accel_iaa.so 00:04:04.713 SO libspdk_vfu_device.so.3.0 00:04:04.713 SYMLINK libspdk_sock_posix.so 00:04:04.713 CC module/bdev/delay/vbdev_delay.o 00:04:04.713 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:04.713 LIB libspdk_fsdev_aio.a 00:04:04.713 CC module/bdev/error/vbdev_error.o 00:04:04.713 SYMLINK libspdk_keyring_linux.so 00:04:04.713 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.713 CC module/bdev/gpt/gpt.o 00:04:04.713 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.713 SO libspdk_fsdev_aio.so.1.0 00:04:04.713 SYMLINK libspdk_vfu_device.so 00:04:04.713 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.713 SYMLINK libspdk_fsdev_aio.so 00:04:04.713 CC module/bdev/malloc/bdev_malloc.o 00:04:04.713 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.713 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.971 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:04.971 CC module/bdev/null/bdev_null.o 00:04:04.971 LIB libspdk_bdev_error.a 00:04:04.971 SO libspdk_bdev_error.so.6.0 00:04:04.971 CC module/bdev/nvme/bdev_nvme.o 00:04:04.971 LIB libspdk_bdev_gpt.a 00:04:04.971 LIB libspdk_blobfs_bdev.a 00:04:04.971 SYMLINK libspdk_bdev_error.so 00:04:04.971 SO libspdk_bdev_gpt.so.6.0 00:04:05.230 LIB libspdk_bdev_delay.a 00:04:05.230 SO libspdk_blobfs_bdev.so.6.0 00:04:05.230 SO libspdk_bdev_delay.so.6.0 00:04:05.230 SYMLINK libspdk_blobfs_bdev.so 00:04:05.230 SYMLINK libspdk_bdev_gpt.so 00:04:05.230 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.230 SYMLINK libspdk_bdev_delay.so 00:04:05.230 CC module/bdev/nvme/nvme_rpc.o 00:04:05.230 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.230 LIB libspdk_bdev_malloc.a 00:04:05.230 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.230 CC module/bdev/raid/bdev_raid.o 00:04:05.230 CC module/bdev/null/bdev_null_rpc.o 00:04:05.230 SO libspdk_bdev_malloc.so.6.0 00:04:05.230 CC module/bdev/split/vbdev_split.o 00:04:05.489 SYMLINK libspdk_bdev_malloc.so 00:04:05.489 LIB libspdk_bdev_lvol.a 00:04:05.489 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.489 SO libspdk_bdev_lvol.so.6.0 00:04:05.489 CC module/bdev/nvme/vbdev_opal.o 00:04:05.489 LIB libspdk_bdev_null.a 00:04:05.489 SYMLINK libspdk_bdev_lvol.so 00:04:05.489 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.489 SO libspdk_bdev_null.so.6.0 00:04:05.489 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:05.748 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.748 LIB libspdk_bdev_split.a 00:04:05.748 SYMLINK libspdk_bdev_null.so 00:04:05.748 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.748 SO libspdk_bdev_split.so.6.0 00:04:05.748 SYMLINK libspdk_bdev_split.so 00:04:05.748 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.748 LIB libspdk_bdev_passthru.a 00:04:06.007 SO libspdk_bdev_passthru.so.6.0 00:04:06.007 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:06.007 CC module/bdev/raid/raid0.o 00:04:06.007 SYMLINK libspdk_bdev_passthru.so 00:04:06.007 CC module/bdev/raid/raid1.o 00:04:06.007 CC module/bdev/aio/bdev_aio.o 00:04:06.007 CC module/bdev/ftl/bdev_ftl.o 00:04:06.007 CC module/bdev/iscsi/bdev_iscsi.o 00:04:06.007 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.265 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:06.265 CC module/bdev/raid/concat.o 00:04:06.265 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.265 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.265 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.524 LIB libspdk_bdev_ftl.a 00:04:06.524 CC module/bdev/aio/bdev_aio_rpc.o 00:04:06.524 SO libspdk_bdev_ftl.so.6.0 00:04:06.524 LIB libspdk_bdev_zone_block.a 00:04:06.524 SO libspdk_bdev_zone_block.so.6.0 00:04:06.524 SYMLINK libspdk_bdev_ftl.so 00:04:06.524 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.524 LIB libspdk_bdev_aio.a 00:04:06.524 SYMLINK libspdk_bdev_zone_block.so 00:04:06.524 SO libspdk_bdev_aio.so.6.0 00:04:06.524 LIB libspdk_bdev_raid.a 00:04:06.784 SYMLINK libspdk_bdev_aio.so 00:04:06.784 LIB libspdk_bdev_iscsi.a 00:04:06.784 SO libspdk_bdev_raid.so.6.0 00:04:06.784 SO libspdk_bdev_iscsi.so.6.0 00:04:06.784 LIB libspdk_bdev_virtio.a 00:04:06.784 SO libspdk_bdev_virtio.so.6.0 00:04:06.784 SYMLINK libspdk_bdev_iscsi.so 00:04:06.784 SYMLINK libspdk_bdev_raid.so 00:04:06.784 SYMLINK libspdk_bdev_virtio.so 00:04:08.159 LIB libspdk_bdev_nvme.a 00:04:08.417 SO libspdk_bdev_nvme.so.7.1 00:04:08.417 SYMLINK libspdk_bdev_nvme.so 00:04:08.984 CC module/event/subsystems/vmd/vmd.o 00:04:08.984 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:08.984 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.984 CC module/event/subsystems/fsdev/fsdev.o 00:04:08.984 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.984 CC module/event/subsystems/keyring/keyring.o 00:04:08.984 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.984 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:08.984 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.984 CC module/event/subsystems/sock/sock.o 00:04:08.984 LIB libspdk_event_iobuf.a 00:04:08.984 LIB libspdk_event_vhost_blk.a 00:04:08.984 LIB libspdk_event_scheduler.a 00:04:08.984 LIB libspdk_event_keyring.a 00:04:09.243 LIB libspdk_event_vfu_tgt.a 00:04:09.243 LIB libspdk_event_fsdev.a 00:04:09.243 LIB libspdk_event_sock.a 00:04:09.243 LIB libspdk_event_vmd.a 00:04:09.243 SO libspdk_event_scheduler.so.4.0 00:04:09.243 SO libspdk_event_iobuf.so.3.0 00:04:09.243 SO libspdk_event_vhost_blk.so.3.0 00:04:09.243 SO libspdk_event_keyring.so.1.0 00:04:09.243 SO libspdk_event_vfu_tgt.so.3.0 00:04:09.243 SO libspdk_event_fsdev.so.1.0 00:04:09.243 SO libspdk_event_sock.so.5.0 00:04:09.243 SO libspdk_event_vmd.so.6.0 00:04:09.243 SYMLINK libspdk_event_scheduler.so 00:04:09.243 SYMLINK libspdk_event_iobuf.so 00:04:09.243 SYMLINK libspdk_event_vhost_blk.so 00:04:09.243 SYMLINK libspdk_event_fsdev.so 
00:04:09.243 SYMLINK libspdk_event_sock.so 00:04:09.243 SYMLINK libspdk_event_vfu_tgt.so 00:04:09.243 SYMLINK libspdk_event_vmd.so 00:04:09.243 SYMLINK libspdk_event_keyring.so 00:04:09.502 CC module/event/subsystems/accel/accel.o 00:04:09.502 LIB libspdk_event_accel.a 00:04:09.761 SO libspdk_event_accel.so.6.0 00:04:09.761 SYMLINK libspdk_event_accel.so 00:04:10.019 CC module/event/subsystems/bdev/bdev.o 00:04:10.278 LIB libspdk_event_bdev.a 00:04:10.278 SO libspdk_event_bdev.so.6.0 00:04:10.278 SYMLINK libspdk_event_bdev.so 00:04:10.537 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.537 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.537 CC module/event/subsystems/nbd/nbd.o 00:04:10.537 CC module/event/subsystems/scsi/scsi.o 00:04:10.537 CC module/event/subsystems/ublk/ublk.o 00:04:10.796 LIB libspdk_event_nbd.a 00:04:10.796 LIB libspdk_event_ublk.a 00:04:10.796 LIB libspdk_event_scsi.a 00:04:10.796 SO libspdk_event_nbd.so.6.0 00:04:10.796 SO libspdk_event_ublk.so.3.0 00:04:10.796 SO libspdk_event_scsi.so.6.0 00:04:10.796 SYMLINK libspdk_event_nbd.so 00:04:10.796 SYMLINK libspdk_event_ublk.so 00:04:10.796 SYMLINK libspdk_event_scsi.so 00:04:10.796 LIB libspdk_event_nvmf.a 00:04:10.796 SO libspdk_event_nvmf.so.6.0 00:04:10.796 SYMLINK libspdk_event_nvmf.so 00:04:11.054 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:11.054 CC module/event/subsystems/iscsi/iscsi.o 00:04:11.314 LIB libspdk_event_vhost_scsi.a 00:04:11.314 LIB libspdk_event_iscsi.a 00:04:11.314 SO libspdk_event_vhost_scsi.so.3.0 00:04:11.314 SO libspdk_event_iscsi.so.6.0 00:04:11.314 SYMLINK libspdk_event_vhost_scsi.so 00:04:11.314 SYMLINK libspdk_event_iscsi.so 00:04:11.573 SO libspdk.so.6.0 00:04:11.573 SYMLINK libspdk.so 00:04:11.831 CC app/spdk_nvme_identify/identify.o 00:04:11.831 CXX app/trace/trace.o 00:04:11.831 CC app/spdk_nvme_perf/perf.o 00:04:11.831 CC app/trace_record/trace_record.o 00:04:11.831 CC app/spdk_lspci/spdk_lspci.o 00:04:11.831 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.831 CC app/nvmf_tgt/nvmf_main.o 00:04:11.831 CC app/spdk_tgt/spdk_tgt.o 00:04:11.831 CC examples/util/zipf/zipf.o 00:04:11.831 CC test/thread/poller_perf/poller_perf.o 00:04:12.090 LINK spdk_lspci 00:04:12.090 LINK nvmf_tgt 00:04:12.090 LINK zipf 00:04:12.090 LINK iscsi_tgt 00:04:12.090 LINK spdk_trace_record 00:04:12.090 LINK poller_perf 00:04:12.090 LINK spdk_tgt 00:04:12.356 LINK spdk_trace 00:04:12.356 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.356 CC app/spdk_top/spdk_top.o 00:04:12.356 CC examples/ioat/perf/perf.o 00:04:12.626 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.626 CC test/app/bdev_svc/bdev_svc.o 00:04:12.626 CC test/dma/test_dma/test_dma.o 00:04:12.626 LINK spdk_nvme_discover 00:04:12.626 CC test/app/histogram_perf/histogram_perf.o 00:04:12.626 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:12.626 LINK lsvmd 00:04:12.884 LINK bdev_svc 00:04:12.884 LINK histogram_perf 00:04:12.884 LINK ioat_perf 00:04:12.884 LINK spdk_nvme_identify 00:04:12.884 CC examples/idxd/perf/perf.o 00:04:12.884 CC examples/vmd/led/led.o 00:04:12.884 LINK spdk_nvme_perf 00:04:13.141 CC test/app/jsoncat/jsoncat.o 00:04:13.141 CC examples/ioat/verify/verify.o 00:04:13.141 CC test/app/stub/stub.o 00:04:13.141 LINK led 00:04:13.141 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.141 LINK nvme_fuzz 00:04:13.141 LINK test_dma 00:04:13.141 LINK jsoncat 00:04:13.397 LINK verify 00:04:13.397 CC app/spdk_dd/spdk_dd.o 00:04:13.397 LINK stub 00:04:13.397 LINK idxd_perf 00:04:13.397 TEST_HEADER include/spdk/accel.h 00:04:13.397 TEST_HEADER 
include/spdk/accel_module.h 00:04:13.397 TEST_HEADER include/spdk/assert.h 00:04:13.397 TEST_HEADER include/spdk/barrier.h 00:04:13.397 TEST_HEADER include/spdk/base64.h 00:04:13.397 TEST_HEADER include/spdk/bdev.h 00:04:13.397 TEST_HEADER include/spdk/bdev_module.h 00:04:13.397 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.397 TEST_HEADER include/spdk/bit_array.h 00:04:13.397 TEST_HEADER include/spdk/bit_pool.h 00:04:13.397 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.397 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.397 TEST_HEADER include/spdk/blobfs.h 00:04:13.397 TEST_HEADER include/spdk/blob.h 00:04:13.397 TEST_HEADER include/spdk/conf.h 00:04:13.397 TEST_HEADER include/spdk/config.h 00:04:13.397 TEST_HEADER include/spdk/cpuset.h 00:04:13.397 TEST_HEADER include/spdk/crc16.h 00:04:13.397 TEST_HEADER include/spdk/crc32.h 00:04:13.397 TEST_HEADER include/spdk/crc64.h 00:04:13.397 TEST_HEADER include/spdk/dif.h 00:04:13.397 TEST_HEADER include/spdk/dma.h 00:04:13.397 TEST_HEADER include/spdk/endian.h 00:04:13.397 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.397 TEST_HEADER include/spdk/env.h 00:04:13.397 TEST_HEADER include/spdk/event.h 00:04:13.397 CC app/fio/nvme/fio_plugin.o 00:04:13.397 TEST_HEADER include/spdk/fd_group.h 00:04:13.397 TEST_HEADER include/spdk/fd.h 00:04:13.397 TEST_HEADER include/spdk/file.h 00:04:13.397 TEST_HEADER include/spdk/fsdev.h 00:04:13.397 TEST_HEADER include/spdk/fsdev_module.h 00:04:13.397 TEST_HEADER include/spdk/ftl.h 00:04:13.397 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:13.397 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.397 TEST_HEADER include/spdk/hexlify.h 00:04:13.397 TEST_HEADER include/spdk/histogram_data.h 00:04:13.397 TEST_HEADER include/spdk/idxd.h 00:04:13.397 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.397 TEST_HEADER include/spdk/init.h 00:04:13.397 TEST_HEADER include/spdk/ioat.h 00:04:13.397 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.397 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.397 TEST_HEADER include/spdk/json.h 00:04:13.397 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.397 TEST_HEADER include/spdk/keyring.h 00:04:13.654 TEST_HEADER include/spdk/keyring_module.h 00:04:13.654 TEST_HEADER include/spdk/likely.h 00:04:13.654 TEST_HEADER include/spdk/log.h 00:04:13.654 TEST_HEADER include/spdk/lvol.h 00:04:13.654 TEST_HEADER include/spdk/md5.h 00:04:13.654 TEST_HEADER include/spdk/memory.h 00:04:13.654 TEST_HEADER include/spdk/mmio.h 00:04:13.654 TEST_HEADER include/spdk/nbd.h 00:04:13.654 TEST_HEADER include/spdk/net.h 00:04:13.654 TEST_HEADER include/spdk/notify.h 00:04:13.654 TEST_HEADER include/spdk/nvme.h 00:04:13.654 TEST_HEADER include/spdk/nvme_intel.h 00:04:13.654 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.654 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.654 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.654 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.654 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.654 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.654 TEST_HEADER include/spdk/nvmf.h 00:04:13.654 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.654 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.654 TEST_HEADER include/spdk/opal.h 00:04:13.654 TEST_HEADER include/spdk/opal_spec.h 00:04:13.654 TEST_HEADER include/spdk/pci_ids.h 00:04:13.654 TEST_HEADER include/spdk/pipe.h 00:04:13.654 CC test/event/event_perf/event_perf.o 00:04:13.654 TEST_HEADER include/spdk/queue.h 00:04:13.654 TEST_HEADER include/spdk/reduce.h 00:04:13.654 TEST_HEADER include/spdk/rpc.h 00:04:13.654 TEST_HEADER 
include/spdk/scheduler.h 00:04:13.654 TEST_HEADER include/spdk/scsi.h 00:04:13.654 TEST_HEADER include/spdk/scsi_spec.h 00:04:13.654 LINK spdk_top 00:04:13.654 TEST_HEADER include/spdk/sock.h 00:04:13.654 CC app/vhost/vhost.o 00:04:13.654 TEST_HEADER include/spdk/stdinc.h 00:04:13.654 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:13.654 TEST_HEADER include/spdk/string.h 00:04:13.654 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.654 TEST_HEADER include/spdk/thread.h 00:04:13.654 TEST_HEADER include/spdk/trace.h 00:04:13.655 TEST_HEADER include/spdk/trace_parser.h 00:04:13.655 TEST_HEADER include/spdk/tree.h 00:04:13.655 TEST_HEADER include/spdk/ublk.h 00:04:13.655 TEST_HEADER include/spdk/util.h 00:04:13.655 TEST_HEADER include/spdk/uuid.h 00:04:13.655 TEST_HEADER include/spdk/version.h 00:04:13.655 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.655 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.655 TEST_HEADER include/spdk/vhost.h 00:04:13.655 TEST_HEADER include/spdk/vmd.h 00:04:13.655 TEST_HEADER include/spdk/xor.h 00:04:13.655 TEST_HEADER include/spdk/zipf.h 00:04:13.655 CXX test/cpp_headers/accel.o 00:04:13.655 LINK event_perf 00:04:13.655 LINK spdk_dd 00:04:13.913 CXX test/cpp_headers/accel_module.o 00:04:13.913 LINK vhost 00:04:13.913 LINK interrupt_tgt 00:04:13.913 CXX test/cpp_headers/assert.o 00:04:13.913 CC test/event/reactor/reactor.o 00:04:14.171 CC test/event/reactor_perf/reactor_perf.o 00:04:14.171 CC test/event/app_repeat/app_repeat.o 00:04:14.171 CXX test/cpp_headers/barrier.o 00:04:14.171 CXX test/cpp_headers/base64.o 00:04:14.171 LINK reactor 00:04:14.171 CC test/env/vtophys/vtophys.o 00:04:14.171 LINK reactor_perf 00:04:14.171 LINK spdk_nvme 00:04:14.171 LINK mem_callbacks 00:04:14.171 LINK app_repeat 00:04:14.171 CXX test/cpp_headers/bdev.o 00:04:14.477 LINK vtophys 00:04:14.477 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.477 CC test/event/scheduler/scheduler.o 00:04:14.478 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.478 CC app/fio/bdev/fio_plugin.o 00:04:14.478 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.478 CXX test/cpp_headers/bdev_module.o 00:04:14.478 CXX test/cpp_headers/bdev_zone.o 00:04:14.478 CC test/env/memory/memory_ut.o 00:04:14.478 CC examples/thread/thread/thread_ex.o 00:04:14.736 LINK env_dpdk_post_init 00:04:14.736 LINK scheduler 00:04:14.736 CXX test/cpp_headers/bit_array.o 00:04:14.736 CC test/env/pci/pci_ut.o 00:04:14.997 LINK thread 00:04:14.997 CXX test/cpp_headers/bit_pool.o 00:04:14.997 LINK vhost_fuzz 00:04:15.255 CC examples/sock/hello_world/hello_sock.o 00:04:15.255 CC test/nvme/aer/aer.o 00:04:15.255 LINK spdk_bdev 00:04:15.255 CXX test/cpp_headers/blob_bdev.o 00:04:15.255 CC test/nvme/reset/reset.o 00:04:15.255 CC test/rpc_client/rpc_client_test.o 00:04:15.255 CXX test/cpp_headers/blobfs_bdev.o 00:04:15.512 LINK pci_ut 00:04:15.512 LINK iscsi_fuzz 00:04:15.512 LINK hello_sock 00:04:15.512 LINK rpc_client_test 00:04:15.512 CXX test/cpp_headers/blobfs.o 00:04:15.512 LINK aer 00:04:15.770 CC examples/accel/perf/accel_perf.o 00:04:15.770 LINK reset 00:04:15.770 CC test/nvme/sgl/sgl.o 00:04:15.770 CXX test/cpp_headers/blob.o 00:04:15.770 CXX test/cpp_headers/conf.o 00:04:15.770 CC test/nvme/e2edp/nvme_dp.o 00:04:16.028 CC examples/nvme/hello_world/hello_world.o 00:04:16.028 CC examples/blob/hello_world/hello_blob.o 00:04:16.028 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:16.028 CXX test/cpp_headers/config.o 00:04:16.028 LINK sgl 00:04:16.028 LINK memory_ut 00:04:16.028 CXX 
test/cpp_headers/cpuset.o 00:04:16.286 LINK nvme_dp 00:04:16.286 CXX test/cpp_headers/crc16.o 00:04:16.286 CC test/accel/dif/dif.o 00:04:16.286 LINK hello_world 00:04:16.286 LINK accel_perf 00:04:16.286 LINK hello_fsdev 00:04:16.286 LINK hello_blob 00:04:16.286 CXX test/cpp_headers/crc32.o 00:04:16.545 CXX test/cpp_headers/crc64.o 00:04:16.545 CC test/nvme/overhead/overhead.o 00:04:16.545 CC examples/nvme/reconnect/reconnect.o 00:04:16.545 CC test/blobfs/mkfs/mkfs.o 00:04:16.803 CC test/nvme/err_injection/err_injection.o 00:04:16.803 CC examples/blob/cli/blobcli.o 00:04:16.803 CXX test/cpp_headers/dif.o 00:04:16.803 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.803 CC test/lvol/esnap/esnap.o 00:04:17.060 LINK err_injection 00:04:17.060 LINK overhead 00:04:17.060 LINK mkfs 00:04:17.060 CXX test/cpp_headers/dma.o 00:04:17.060 LINK reconnect 00:04:17.060 LINK hello_bdev 00:04:17.318 CXX test/cpp_headers/endian.o 00:04:17.318 LINK dif 00:04:17.318 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:17.318 CC test/nvme/startup/startup.o 00:04:17.318 CXX test/cpp_headers/env_dpdk.o 00:04:17.318 CC examples/bdev/bdevperf/bdevperf.o 00:04:17.576 LINK blobcli 00:04:17.576 CC examples/nvme/arbitration/arbitration.o 00:04:17.576 CXX test/cpp_headers/env.o 00:04:17.576 LINK startup 00:04:17.576 CC examples/nvme/hotplug/hotplug.o 00:04:17.576 CC test/nvme/reserve/reserve.o 00:04:17.834 CXX test/cpp_headers/event.o 00:04:17.834 CXX test/cpp_headers/fd_group.o 00:04:18.091 CC test/nvme/simple_copy/simple_copy.o 00:04:18.091 LINK reserve 00:04:18.091 LINK hotplug 00:04:18.091 CXX test/cpp_headers/fd.o 00:04:18.091 LINK arbitration 00:04:18.348 LINK nvme_manage 00:04:18.348 CC test/nvme/connect_stress/connect_stress.o 00:04:18.348 CXX test/cpp_headers/file.o 00:04:18.348 CXX test/cpp_headers/fsdev.o 00:04:18.348 CC test/nvme/boot_partition/boot_partition.o 00:04:18.348 LINK simple_copy 00:04:18.606 LINK connect_stress 00:04:18.606 CC test/nvme/compliance/nvme_compliance.o 00:04:18.606 CXX test/cpp_headers/fsdev_module.o 00:04:18.606 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:18.606 LINK boot_partition 00:04:18.606 CC examples/nvme/abort/abort.o 00:04:18.865 LINK bdevperf 00:04:18.865 CXX test/cpp_headers/ftl.o 00:04:18.865 CC test/bdev/bdevio/bdevio.o 00:04:18.865 LINK cmb_copy 00:04:18.865 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.865 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.123 LINK nvme_compliance 00:04:19.123 CXX test/cpp_headers/fuse_dispatcher.o 00:04:19.123 CC test/nvme/fdp/fdp.o 00:04:19.123 LINK fused_ordering 00:04:19.123 LINK abort 00:04:19.123 CC test/nvme/cuse/cuse.o 00:04:19.123 LINK doorbell_aers 00:04:19.380 CXX test/cpp_headers/gpt_spec.o 00:04:19.380 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:19.380 CXX test/cpp_headers/hexlify.o 00:04:19.380 LINK bdevio 00:04:19.380 CXX test/cpp_headers/histogram_data.o 00:04:19.380 CXX test/cpp_headers/idxd.o 00:04:19.380 CXX test/cpp_headers/idxd_spec.o 00:04:19.639 CXX test/cpp_headers/init.o 00:04:19.639 CXX test/cpp_headers/ioat.o 00:04:19.639 LINK pmr_persistence 00:04:19.639 CXX test/cpp_headers/ioat_spec.o 00:04:19.639 LINK fdp 00:04:19.639 CXX test/cpp_headers/iscsi_spec.o 00:04:19.639 CXX test/cpp_headers/json.o 00:04:19.639 CXX test/cpp_headers/jsonrpc.o 00:04:19.639 CXX test/cpp_headers/keyring.o 00:04:19.639 CXX test/cpp_headers/keyring_module.o 00:04:19.639 CXX test/cpp_headers/likely.o 00:04:19.896 CXX test/cpp_headers/log.o 00:04:19.896 CXX test/cpp_headers/lvol.o 00:04:19.896 CXX 
test/cpp_headers/md5.o 00:04:19.896 CXX test/cpp_headers/memory.o 00:04:19.896 CXX test/cpp_headers/mmio.o 00:04:19.896 CXX test/cpp_headers/nbd.o 00:04:19.896 CC examples/nvmf/nvmf/nvmf.o 00:04:19.896 CXX test/cpp_headers/net.o 00:04:19.896 CXX test/cpp_headers/notify.o 00:04:19.896 CXX test/cpp_headers/nvme.o 00:04:20.154 CXX test/cpp_headers/nvme_intel.o 00:04:20.154 CXX test/cpp_headers/nvme_ocssd.o 00:04:20.154 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:20.154 CXX test/cpp_headers/nvme_spec.o 00:04:20.154 CXX test/cpp_headers/nvme_zns.o 00:04:20.154 CXX test/cpp_headers/nvmf_cmd.o 00:04:20.154 LINK nvmf 00:04:20.154 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:20.413 CXX test/cpp_headers/nvmf.o 00:04:20.413 CXX test/cpp_headers/nvmf_spec.o 00:04:20.413 CXX test/cpp_headers/nvmf_transport.o 00:04:20.413 CXX test/cpp_headers/opal.o 00:04:20.413 CXX test/cpp_headers/opal_spec.o 00:04:20.413 CXX test/cpp_headers/pci_ids.o 00:04:20.413 CXX test/cpp_headers/pipe.o 00:04:20.671 CXX test/cpp_headers/queue.o 00:04:20.671 CXX test/cpp_headers/reduce.o 00:04:20.671 CXX test/cpp_headers/rpc.o 00:04:20.671 CXX test/cpp_headers/scheduler.o 00:04:20.671 CXX test/cpp_headers/scsi.o 00:04:20.671 CXX test/cpp_headers/scsi_spec.o 00:04:20.671 CXX test/cpp_headers/sock.o 00:04:20.671 CXX test/cpp_headers/stdinc.o 00:04:20.671 CXX test/cpp_headers/string.o 00:04:20.929 CXX test/cpp_headers/thread.o 00:04:20.929 CXX test/cpp_headers/trace.o 00:04:20.929 CXX test/cpp_headers/trace_parser.o 00:04:20.929 CXX test/cpp_headers/tree.o 00:04:20.929 CXX test/cpp_headers/ublk.o 00:04:20.929 CXX test/cpp_headers/util.o 00:04:20.929 CXX test/cpp_headers/uuid.o 00:04:20.929 LINK cuse 00:04:20.929 CXX test/cpp_headers/version.o 00:04:20.929 CXX test/cpp_headers/vfio_user_pci.o 00:04:20.929 CXX test/cpp_headers/vfio_user_spec.o 00:04:20.929 CXX test/cpp_headers/vhost.o 00:04:20.929 CXX test/cpp_headers/vmd.o 00:04:20.929 CXX test/cpp_headers/xor.o 00:04:21.186 CXX test/cpp_headers/zipf.o 00:04:23.714 LINK esnap 00:04:23.973 00:04:23.973 real 1m34.897s 00:04:23.973 user 8m56.150s 00:04:23.973 sys 1m50.973s 00:04:23.973 09:36:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:23.973 09:36:11 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.973 ************************************ 00:04:23.973 END TEST make 00:04:23.973 ************************************ 00:04:24.233 09:36:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:24.233 09:36:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:24.233 09:36:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:24.233 09:36:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.233 09:36:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:24.233 09:36:11 -- pm/common@44 -- $ pid=5309 00:04:24.233 09:36:11 -- pm/common@50 -- $ kill -TERM 5309 00:04:24.233 09:36:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.233 09:36:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:24.233 09:36:11 -- pm/common@44 -- $ pid=5311 00:04:24.233 09:36:11 -- pm/common@50 -- $ kill -TERM 5311 00:04:24.233 09:36:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:24.233 09:36:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:24.233 09:36:11 -- common/autotest_common.sh@1710 -- # [[ 
y == y ]] 00:04:24.233 09:36:11 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.233 09:36:11 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.233 09:36:11 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.233 09:36:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.233 09:36:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.233 09:36:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.233 09:36:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.233 09:36:11 -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.233 09:36:11 -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.233 09:36:11 -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.233 09:36:11 -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.233 09:36:11 -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.233 09:36:11 -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.233 09:36:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.233 09:36:11 -- scripts/common.sh@344 -- # case "$op" in 00:04:24.233 09:36:11 -- scripts/common.sh@345 -- # : 1 00:04:24.233 09:36:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.233 09:36:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.233 09:36:11 -- scripts/common.sh@365 -- # decimal 1 00:04:24.233 09:36:11 -- scripts/common.sh@353 -- # local d=1 00:04:24.233 09:36:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.233 09:36:11 -- scripts/common.sh@355 -- # echo 1 00:04:24.233 09:36:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.233 09:36:11 -- scripts/common.sh@366 -- # decimal 2 00:04:24.233 09:36:11 -- scripts/common.sh@353 -- # local d=2 00:04:24.233 09:36:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.233 09:36:11 -- scripts/common.sh@355 -- # echo 2 00:04:24.233 09:36:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.233 09:36:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.233 09:36:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.233 09:36:11 -- scripts/common.sh@368 -- # return 0 00:04:24.233 09:36:11 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.233 09:36:11 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.233 --rc genhtml_branch_coverage=1 00:04:24.233 --rc genhtml_function_coverage=1 00:04:24.233 --rc genhtml_legend=1 00:04:24.233 --rc geninfo_all_blocks=1 00:04:24.233 --rc geninfo_unexecuted_blocks=1 00:04:24.233 00:04:24.233 ' 00:04:24.233 09:36:11 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.233 --rc genhtml_branch_coverage=1 00:04:24.233 --rc genhtml_function_coverage=1 00:04:24.233 --rc genhtml_legend=1 00:04:24.233 --rc geninfo_all_blocks=1 00:04:24.233 --rc geninfo_unexecuted_blocks=1 00:04:24.233 00:04:24.233 ' 00:04:24.233 09:36:11 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.233 --rc genhtml_branch_coverage=1 00:04:24.233 --rc genhtml_function_coverage=1 00:04:24.233 --rc genhtml_legend=1 00:04:24.233 --rc geninfo_all_blocks=1 00:04:24.233 --rc geninfo_unexecuted_blocks=1 00:04:24.233 00:04:24.233 ' 00:04:24.233 09:36:11 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.233 --rc 
genhtml_branch_coverage=1 00:04:24.233 --rc genhtml_function_coverage=1 00:04:24.233 --rc genhtml_legend=1 00:04:24.233 --rc geninfo_all_blocks=1 00:04:24.233 --rc geninfo_unexecuted_blocks=1 00:04:24.233 00:04:24.233 ' 00:04:24.233 09:36:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.233 09:36:11 -- nvmf/common.sh@7 -- # uname -s 00:04:24.233 09:36:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.233 09:36:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.233 09:36:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.233 09:36:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.233 09:36:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.233 09:36:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.233 09:36:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.233 09:36:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.233 09:36:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.233 09:36:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.233 09:36:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:04:24.233 09:36:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:04:24.233 09:36:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.233 09:36:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.233 09:36:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:24.233 09:36:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.233 09:36:11 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.233 09:36:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.233 09:36:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.233 09:36:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.233 09:36:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.233 09:36:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.233 09:36:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.233 09:36:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.233 09:36:11 -- paths/export.sh@5 -- # export PATH 00:04:24.233 09:36:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.233 09:36:11 -- nvmf/common.sh@51 -- # : 0 00:04:24.233 09:36:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.233 
09:36:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.233 09:36:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.233 09:36:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.233 09:36:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.233 09:36:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.233 09:36:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.233 09:36:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.233 09:36:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.233 09:36:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.233 09:36:11 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.233 09:36:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.233 09:36:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.233 09:36:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.493 09:36:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.493 09:36:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.493 09:36:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.493 09:36:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.493 09:36:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.493 09:36:11 -- spdk/autotest.sh@48 -- # udevadm_pid=56709 00:04:24.493 09:36:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.493 09:36:11 -- pm/common@17 -- # local monitor 00:04:24.493 09:36:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.493 09:36:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.493 09:36:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.493 09:36:11 -- pm/common@25 -- # sleep 1 00:04:24.493 09:36:11 -- pm/common@21 -- # date +%s 00:04:24.493 09:36:11 -- pm/common@21 -- # date +%s 00:04:24.493 09:36:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733391371 00:04:24.493 09:36:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733391371 00:04:24.493 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733391371_collect-cpu-load.pm.log 00:04:24.493 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733391371_collect-vmstat.pm.log 00:04:25.430 09:36:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:25.430 09:36:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:25.430 09:36:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.430 09:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.430 09:36:12 -- spdk/autotest.sh@59 -- # create_test_list 00:04:25.430 09:36:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:25.430 09:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.430 09:36:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:25.430 09:36:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:25.430 09:36:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 
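The lcov version gate traced above (scripts/common.sh@333-368) is a plain field-wise compare of dot/dash-separated version strings; `lt 1.15 2` succeeds because 1 < 2 in the first field. A minimal bash recreation of that shape, simplified from the trace (missing fields default to 0; this is a sketch, not the exact scripts/common.sh helper):

    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"    # split "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$2"    # split "2"    -> (2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1                                              # equal: not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2: enable the legacy --rc options"

This is why the run exports LCOV_OPTS with the lcov_branch_coverage/lcov_function_coverage switches: those are the lcov 1.x spellings of the coverage flags.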
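Earlier in the same preamble (spdk/autotest.sh@33-40) the harness swaps the kernel core handler for SPDK's collector. A sketch of that save-and-override shape, with the restore step assumed (in the real run cleanup is handled by the autotest_cleanup trap installed just below); writing core_pattern requires root:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # save the systemd-coredump handler
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps

    # Pipe crashes to the collector with PID, signal number, and timestamp.
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern

    # ... run the tests ...

    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # assumed restore step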
00:04:25.430 09:36:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:25.430 09:36:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:25.430 09:36:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.430 09:36:12 -- common/autotest_common.sh@1457 -- # uname 00:04:25.430 09:36:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.430 09:36:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.430 09:36:12 -- common/autotest_common.sh@1477 -- # uname 00:04:25.430 09:36:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.430 09:36:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.430 09:36:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.430 lcov: LCOV version 1.15 00:04:25.430 09:36:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:40.338 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:40.338 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:55.218 09:36:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:55.218 09:36:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.218 09:36:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.218 09:36:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:55.218 09:36:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.218 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:55.218 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:55.218 09:36:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:55.218 09:36:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:55.218 09:36:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:55.218 09:36:41 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:55.218 09:36:41 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:55.218 09:36:41 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:55.218 09:36:41 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:55.218 09:36:41 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:55.218 09:36:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:55.218 09:36:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:55.218 09:36:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:55.218 09:36:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:55.218 09:36:41 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:55.218 09:36:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 
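The lcov invocation at the end of the block above takes an initial (-i) capture tagged Baseline before any test runs, so files that are never executed still appear with zero counts. The usual completion of that flow is a second capture after the tests plus a merge; the post-test capture and merge below are assumed follow-up steps, not visible in this chunk (rc flags abridged to the two lcov 1.x coverage switches):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output

    $LCOV -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # ... run the instrumented tests ...
    $LCOV -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"            # assumed
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # assumed merge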
00:04:55.218 09:36:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:55.218 09:36:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:55.218 09:36:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:55.218 09:36:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:55.218 09:36:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:55.218 09:36:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:55.218 09:36:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:55.218 09:36:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:55.219 09:36:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:55.219 09:36:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:55.219 09:36:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:55.219 09:36:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:55.219 09:36:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.219 09:36:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.219 09:36:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:55.219 09:36:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:55.219 09:36:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:55.219 No valid GPT data, bailing 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # pt= 00:04:55.219 09:36:41 -- scripts/common.sh@395 -- # return 1 00:04:55.219 09:36:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:55.219 1+0 records in 00:04:55.219 1+0 records out 00:04:55.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433404 s, 242 MB/s 00:04:55.219 09:36:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.219 09:36:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.219 09:36:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:55.219 09:36:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:55.219 09:36:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:55.219 No valid GPT data, bailing 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # pt= 00:04:55.219 09:36:41 -- scripts/common.sh@395 -- # return 1 00:04:55.219 09:36:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:55.219 1+0 records in 00:04:55.219 1+0 records out 00:04:55.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418744 s, 250 MB/s 00:04:55.219 09:36:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.219 09:36:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.219 09:36:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:55.219 09:36:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:55.219 09:36:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:55.219 No valid GPT data, bailing 00:04:55.219 09:36:41 -- scripts/common.sh@394 
-- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # pt= 00:04:55.219 09:36:41 -- scripts/common.sh@395 -- # return 1 00:04:55.219 09:36:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:55.219 1+0 records in 00:04:55.219 1+0 records out 00:04:55.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502854 s, 209 MB/s 00:04:55.219 09:36:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:55.219 09:36:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:55.219 09:36:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:55.219 09:36:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:55.219 09:36:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:55.219 No valid GPT data, bailing 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:55.219 09:36:41 -- scripts/common.sh@394 -- # pt= 00:04:55.219 09:36:41 -- scripts/common.sh@395 -- # return 1 00:04:55.219 09:36:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:55.219 1+0 records in 00:04:55.219 1+0 records out 00:04:55.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473723 s, 221 MB/s 00:04:55.219 09:36:41 -- spdk/autotest.sh@105 -- # sync 00:04:55.219 09:36:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.219 09:36:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.219 09:36:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:56.595 09:36:43 -- spdk/autotest.sh@111 -- # uname -s 00:04:56.595 09:36:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:56.595 09:36:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:56.595 09:36:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:57.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.161 Hugepages 00:04:57.161 node hugesize free / total 00:04:57.161 node0 1048576kB 0 / 0 00:04:57.161 node0 2048kB 0 / 0 00:04:57.161 00:04:57.161 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.161 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:57.161 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:57.420 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:57.420 09:36:44 -- spdk/autotest.sh@117 -- # uname -s 00:04:57.420 09:36:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:57.420 09:36:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:57.420 09:36:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.985 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.243 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.243 09:36:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:59.178 09:36:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:59.178 09:36:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:59.178 09:36:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:59.178 09:36:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:59.178 09:36:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.178 09:36:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.178 
09:36:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.178 09:36:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:59.178 09:36:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.178 09:36:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:59.178 09:36:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:59.178 09:36:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.695 Waiting for block devices as requested 00:04:59.695 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.695 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.695 09:36:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:59.695 09:36:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:59.695 09:36:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:59.695 09:36:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:59.954 09:36:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:59.954 09:36:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:59.954 09:36:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:59.954 09:36:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1543 -- # continue 00:04:59.954 09:36:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:59.954 09:36:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:59.954 09:36:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:59.954 09:36:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:59.954 09:36:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:59.954 09:36:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:59.954 09:36:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:59.954 09:36:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:59.954 09:36:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:59.954 09:36:47 -- common/autotest_common.sh@1543 -- # continue 00:04:59.954 09:36:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:59.954 09:36:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.954 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:59.954 09:36:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:59.954 09:36:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.954 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:59.954 09:36:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.781 09:36:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:00.781 09:36:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.781 09:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.781 09:36:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:00.781 09:36:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:00.781 09:36:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.781 09:36:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:00.781 09:36:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:00.781 09:36:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:00.781 09:36:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:00.781 09:36:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:00.781 09:36:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:00.781 09:36:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:00.781 09:36:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.781 09:36:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:00.781 09:36:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:00.781 09:36:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:00.781 09:36:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:00.781 09:36:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 
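The pre-cleanup pass above (spdk/autotest.sh@97-101) only zeroes a namespace once block_in_use says it carries no partition table: scripts/common.sh runs spdk-gpt.py, then asks blkid for a PTTYPE, and returns 1 (free) when both come back empty. A condensed sketch of that gate; the in-use test here approximates the traced helper rather than copying it:

    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                       # whole namespaces only
        if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" \
            || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
            echo "$dev appears in use, skipping"
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1             # clobber stale metadata
    done

The 1 MiB wipe is enough to destroy the primary GPT header and most filesystem superblocks without zeroing the whole device, which is why each dd above completes in a few milliseconds.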
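The cleanup helpers traced above resolve each PCI address to its character device through sysfs before probing it with nvme-cli (common/autotest_common.sh@1487-1534): readlink the /sys/class/nvme entries, match on the BDF, then read the OACS word from id-ctrl, where bit 3 is namespace management (0x12a & 0x8 = 8, hence oacs_ns_manage=8 above). A standalone sketch of the same probe; it assumes nvme-cli is installed and the sysfs layout seen on this VM:

    bdf=0000:00:10.0
    ctrl_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrl=/dev/$(basename "$ctrl_path")                       # e.g. /dev/nvme1

    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)   # yields ' 0x12a' here
    if (( (oacs & 0x8) != 0 )); then                         # bit 3 = NS management
        echo "$ctrl supports namespace management"
    fi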
00:05:00.781 09:36:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:00.781 09:36:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:00.781 09:36:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:00.781 09:36:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:00.781 09:36:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:00.781 09:36:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:00.781 09:36:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:00.781 09:36:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:00.781 09:36:48 -- common/autotest_common.sh@1572 -- # return 0 00:05:00.781 09:36:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:00.782 09:36:48 -- common/autotest_common.sh@1580 -- # return 0 00:05:00.782 09:36:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:00.782 09:36:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:00.782 09:36:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:00.782 09:36:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:00.782 09:36:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:00.782 09:36:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.782 09:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.782 09:36:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:00.782 09:36:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:00.782 09:36:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.782 09:36:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.782 09:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:00.782 ************************************ 00:05:00.782 START TEST env 00:05:00.782 ************************************ 00:05:00.782 09:36:48 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:01.041 * Looking for test storage... 00:05:01.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.041 09:36:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.041 09:36:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.041 09:36:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.041 09:36:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.041 09:36:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.041 09:36:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.041 09:36:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.041 09:36:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.041 09:36:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.041 09:36:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.041 09:36:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.041 09:36:48 env -- scripts/common.sh@344 -- # case "$op" in 00:05:01.041 09:36:48 env -- scripts/common.sh@345 -- # : 1 00:05:01.041 09:36:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.041 09:36:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.041 09:36:48 env -- scripts/common.sh@365 -- # decimal 1 00:05:01.041 09:36:48 env -- scripts/common.sh@353 -- # local d=1 00:05:01.041 09:36:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.041 09:36:48 env -- scripts/common.sh@355 -- # echo 1 00:05:01.041 09:36:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.041 09:36:48 env -- scripts/common.sh@366 -- # decimal 2 00:05:01.041 09:36:48 env -- scripts/common.sh@353 -- # local d=2 00:05:01.041 09:36:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.041 09:36:48 env -- scripts/common.sh@355 -- # echo 2 00:05:01.041 09:36:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.041 09:36:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.041 09:36:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.041 09:36:48 env -- scripts/common.sh@368 -- # return 0 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.041 --rc genhtml_branch_coverage=1 00:05:01.041 --rc genhtml_function_coverage=1 00:05:01.041 --rc genhtml_legend=1 00:05:01.041 --rc geninfo_all_blocks=1 00:05:01.041 --rc geninfo_unexecuted_blocks=1 00:05:01.041 00:05:01.041 ' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.041 --rc genhtml_branch_coverage=1 00:05:01.041 --rc genhtml_function_coverage=1 00:05:01.041 --rc genhtml_legend=1 00:05:01.041 --rc geninfo_all_blocks=1 00:05:01.041 --rc geninfo_unexecuted_blocks=1 00:05:01.041 00:05:01.041 ' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.041 --rc genhtml_branch_coverage=1 00:05:01.041 --rc genhtml_function_coverage=1 00:05:01.041 --rc genhtml_legend=1 00:05:01.041 --rc geninfo_all_blocks=1 00:05:01.041 --rc geninfo_unexecuted_blocks=1 00:05:01.041 00:05:01.041 ' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.041 --rc genhtml_branch_coverage=1 00:05:01.041 --rc genhtml_function_coverage=1 00:05:01.041 --rc genhtml_legend=1 00:05:01.041 --rc geninfo_all_blocks=1 00:05:01.041 --rc geninfo_unexecuted_blocks=1 00:05:01.041 00:05:01.041 ' 00:05:01.041 09:36:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.041 09:36:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.041 09:36:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.041 ************************************ 00:05:01.041 START TEST env_memory 00:05:01.041 ************************************ 00:05:01.041 09:36:48 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:01.041 00:05:01.041 00:05:01.041 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.041 http://cunit.sourceforge.net/ 00:05:01.041 00:05:01.041 00:05:01.041 Suite: memory 00:05:01.300 Test: alloc and free memory map ...[2024-12-05 09:36:48.689326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:01.300 passed 00:05:01.300 Test: mem map translation ...[2024-12-05 09:36:48.749986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:01.300 [2024-12-05 09:36:48.750062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:01.300 [2024-12-05 09:36:48.750157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:01.300 [2024-12-05 09:36:48.750189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:01.300 passed 00:05:01.300 Test: mem map registration ...[2024-12-05 09:36:48.848336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:01.300 [2024-12-05 09:36:48.848408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:01.300 passed 00:05:01.559 Test: mem map adjacent registrations ...passed 00:05:01.559 00:05:01.559 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.559 suites 1 1 n/a 0 0 00:05:01.559 tests 4 4 4 0 0 00:05:01.559 asserts 152 152 152 0 n/a 00:05:01.559 00:05:01.559 Elapsed time = 0.342 seconds 00:05:01.559 00:05:01.559 real 0m0.389s 00:05:01.559 user 0m0.349s 00:05:01.559 sys 0m0.031s 00:05:01.559 09:36:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.559 09:36:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:01.559 ************************************ 00:05:01.559 END TEST env_memory 00:05:01.559 ************************************ 00:05:01.559 09:36:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:01.559 09:36:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.559 09:36:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.559 09:36:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.559 ************************************ 00:05:01.559 START TEST env_vtophys 00:05:01.559 ************************************ 00:05:01.559 09:36:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:01.559 EAL: lib.eal log level changed from notice to debug 00:05:01.559 EAL: Detected lcore 0 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 1 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 2 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 3 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 4 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 5 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 6 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 7 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 8 as core 0 on socket 0 00:05:01.559 EAL: Detected lcore 9 as core 0 on socket 0 00:05:01.559 EAL: Maximum logical cores by configuration: 128 00:05:01.559 EAL: Detected CPU lcores: 10 00:05:01.559 EAL: Detected NUMA nodes: 1 00:05:01.559 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:01.559 EAL: Detected shared linkage of DPDK 00:05:01.559 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:01.559 EAL: Selected IOVA mode 'PA' 00:05:01.559 EAL: Probing VFIO support... 00:05:01.559 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:01.559 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:01.559 EAL: Ask a virtual area of 0x2e000 bytes 00:05:01.559 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:01.559 EAL: Setting up physically contiguous memory... 00:05:01.559 EAL: Setting maximum number of open files to 524288 00:05:01.559 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:01.559 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:01.559 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.559 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:01.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.559 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.559 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:01.559 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:01.559 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.559 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:01.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.559 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.559 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:01.559 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:01.559 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.559 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:01.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.559 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.559 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:01.559 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:01.559 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.559 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:01.559 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.559 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.559 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:01.559 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:01.559 EAL: Hugepages will be freed exactly as allocated. 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.559 EAL: No shared files mode enabled, IPC is disabled 00:05:01.818 EAL: TSC frequency is ~2200000 KHz 00:05:01.818 EAL: Main lcore 0 is ready (tid=7f175369ca40;cpuset=[0]) 00:05:01.818 EAL: Trying to obtain current memory policy. 00:05:01.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.818 EAL: Restoring previous memory policy: 0 00:05:01.818 EAL: request: mp_malloc_sync 00:05:01.818 EAL: No shared files mode enabled, IPC is disabled 00:05:01.818 EAL: Heap on socket 0 was expanded by 2MB 00:05:01.818 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:01.818 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:01.818 EAL: Mem event callback 'spdk:(nil)' registered 00:05:01.818 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:01.818 00:05:01.818 00:05:01.818 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.818 http://cunit.sourceforge.net/ 00:05:01.818 00:05:01.818 00:05:01.818 Suite: components_suite 00:05:02.078 Test: vtophys_malloc_test ...passed 00:05:02.078 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:02.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.078 EAL: Restoring previous memory policy: 4 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was expanded by 4MB 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was shrunk by 4MB 00:05:02.078 EAL: Trying to obtain current memory policy. 00:05:02.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.078 EAL: Restoring previous memory policy: 4 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was expanded by 6MB 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was shrunk by 6MB 00:05:02.078 EAL: Trying to obtain current memory policy. 00:05:02.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.338 EAL: Restoring previous memory policy: 4 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was expanded by 10MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 10MB 00:05:02.338 EAL: Trying to obtain current memory policy. 00:05:02.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.338 EAL: Restoring previous memory policy: 4 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was expanded by 18MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 18MB 00:05:02.338 EAL: Trying to obtain current memory policy. 00:05:02.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.338 EAL: Restoring previous memory policy: 4 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was expanded by 34MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 34MB 00:05:02.338 EAL: Trying to obtain current memory policy. 
00:05:02.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.338 EAL: Restoring previous memory policy: 4 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was expanded by 66MB 00:05:02.338 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.338 EAL: request: mp_malloc_sync 00:05:02.338 EAL: No shared files mode enabled, IPC is disabled 00:05:02.338 EAL: Heap on socket 0 was shrunk by 66MB 00:05:02.598 EAL: Trying to obtain current memory policy. 00:05:02.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.598 EAL: Restoring previous memory policy: 4 00:05:02.598 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.598 EAL: request: mp_malloc_sync 00:05:02.598 EAL: No shared files mode enabled, IPC is disabled 00:05:02.598 EAL: Heap on socket 0 was expanded by 130MB 00:05:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.857 EAL: request: mp_malloc_sync 00:05:02.857 EAL: No shared files mode enabled, IPC is disabled 00:05:02.857 EAL: Heap on socket 0 was shrunk by 130MB 00:05:02.857 EAL: Trying to obtain current memory policy. 00:05:02.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.857 EAL: Restoring previous memory policy: 4 00:05:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.857 EAL: request: mp_malloc_sync 00:05:02.857 EAL: No shared files mode enabled, IPC is disabled 00:05:02.857 EAL: Heap on socket 0 was expanded by 258MB 00:05:03.424 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.424 EAL: request: mp_malloc_sync 00:05:03.424 EAL: No shared files mode enabled, IPC is disabled 00:05:03.424 EAL: Heap on socket 0 was shrunk by 258MB 00:05:03.682 EAL: Trying to obtain current memory policy. 00:05:03.682 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.682 EAL: Restoring previous memory policy: 4 00:05:03.682 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.682 EAL: request: mp_malloc_sync 00:05:03.682 EAL: No shared files mode enabled, IPC is disabled 00:05:03.682 EAL: Heap on socket 0 was expanded by 514MB 00:05:04.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.619 EAL: request: mp_malloc_sync 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.186 EAL: Trying to obtain current memory policy. 
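The expand/shrink pairs in vtophys_spdk_malloc_test step through a doubling series: the reported heap expansions run 4, 6, 10, 18, 34, 66, ... MB, i.e. 2^n + 2 for n = 1..10, consistent with each round allocating twice the previous amount on top of a constant 2 MB of overhead. That reading is inferred from the trace, not from the test source; a one-liner reproducing the sequence:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB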
00:05:05.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.445 EAL: Restoring previous memory policy: 4 00:05:05.445 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.445 EAL: request: mp_malloc_sync 00:05:05.445 EAL: No shared files mode enabled, IPC is disabled 00:05:05.445 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.822 EAL: request: mp_malloc_sync 00:05:06.822 EAL: No shared files mode enabled, IPC is disabled 00:05:06.822 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.222 passed 00:05:08.222 00:05:08.222 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.222 suites 1 1 n/a 0 0 00:05:08.222 tests 2 2 2 0 0 00:05:08.222 asserts 5649 5649 5649 0 n/a 00:05:08.222 00:05:08.222 Elapsed time = 6.153 seconds 00:05:08.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.222 EAL: request: mp_malloc_sync 00:05:08.222 EAL: No shared files mode enabled, IPC is disabled 00:05:08.222 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.222 EAL: No shared files mode enabled, IPC is disabled 00:05:08.222 EAL: No shared files mode enabled, IPC is disabled 00:05:08.222 EAL: No shared files mode enabled, IPC is disabled 00:05:08.222 00:05:08.222 real 0m6.485s 00:05:08.222 user 0m5.392s 00:05:08.222 sys 0m0.939s 00:05:08.222 09:36:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.222 09:36:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.222 ************************************ 00:05:08.222 END TEST env_vtophys 00:05:08.222 ************************************ 00:05:08.222 09:36:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.222 09:36:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.222 09:36:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.222 09:36:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.222 ************************************ 00:05:08.222 START TEST env_pci 00:05:08.222 ************************************ 00:05:08.222 09:36:55 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.222 00:05:08.222 00:05:08.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.222 http://cunit.sourceforge.net/ 00:05:08.222 00:05:08.222 00:05:08.222 Suite: pci 00:05:08.222 Test: pci_hook ...[2024-12-05 09:36:55.636064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58982 has claimed it 00:05:08.222 passed 00:05:08.222 00:05:08.222 EAL: Cannot find device (10000:00:01.0) 00:05:08.222 EAL: Failed to attach device on primary process 00:05:08.222 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.222 suites 1 1 n/a 0 0 00:05:08.222 tests 1 1 1 0 0 00:05:08.222 asserts 25 25 25 0 n/a 00:05:08.222 00:05:08.222 Elapsed time = 0.008 seconds 00:05:08.222 00:05:08.222 real 0m0.088s 00:05:08.222 user 0m0.040s 00:05:08.222 sys 0m0.047s 00:05:08.222 09:36:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.222 09:36:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.222 ************************************ 00:05:08.222 END TEST env_pci 00:05:08.222 ************************************ 00:05:08.222 09:36:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.222 09:36:55 env -- env/env.sh@15 -- # uname 00:05:08.222 09:36:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.222 09:36:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.222 09:36:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.222 09:36:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:08.222 09:36:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.222 09:36:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.222 ************************************ 00:05:08.222 START TEST env_dpdk_post_init 00:05:08.222 ************************************ 00:05:08.222 09:36:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.222 EAL: Detected CPU lcores: 10 00:05:08.222 EAL: Detected NUMA nodes: 1 00:05:08.222 EAL: Detected shared linkage of DPDK 00:05:08.222 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.222 EAL: Selected IOVA mode 'PA' 00:05:08.480 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.480 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:08.480 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:08.480 Starting DPDK initialization... 00:05:08.480 Starting SPDK post initialization... 00:05:08.480 SPDK NVMe probe 00:05:08.480 Attaching to 0000:00:10.0 00:05:08.480 Attaching to 0000:00:11.0 00:05:08.480 Attached to 0000:00:10.0 00:05:08.480 Attached to 0000:00:11.0 00:05:08.480 Cleaning up... 00:05:08.480 00:05:08.480 real 0m0.278s 00:05:08.480 user 0m0.088s 00:05:08.480 sys 0m0.090s 00:05:08.480 09:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.480 09:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.480 ************************************ 00:05:08.480 END TEST env_dpdk_post_init 00:05:08.480 ************************************ 00:05:08.480 09:36:56 env -- env/env.sh@26 -- # uname 00:05:08.480 09:36:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.480 09:36:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.480 09:36:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.480 09:36:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.480 09:36:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.480 ************************************ 00:05:08.480 START TEST env_mem_callbacks 00:05:08.480 ************************************ 00:05:08.480 09:36:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.738 EAL: Detected CPU lcores: 10 00:05:08.738 EAL: Detected NUMA nodes: 1 00:05:08.738 EAL: Detected shared linkage of DPDK 00:05:08.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.738 EAL: Selected IOVA mode 'PA' 00:05:08.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.738 00:05:08.738 00:05:08.738 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.738 http://cunit.sourceforge.net/ 00:05:08.738 00:05:08.738 00:05:08.738 Suite: memory 00:05:08.738 Test: test ... 
00:05:08.738 register 0x200000200000 2097152 00:05:08.738 malloc 3145728 00:05:08.738 register 0x200000400000 4194304 00:05:08.738 buf 0x2000004fffc0 len 3145728 PASSED 00:05:08.738 malloc 64 00:05:08.738 buf 0x2000004ffec0 len 64 PASSED 00:05:08.738 malloc 4194304 00:05:08.738 register 0x200000800000 6291456 00:05:08.738 buf 0x2000009fffc0 len 4194304 PASSED 00:05:08.738 free 0x2000004fffc0 3145728 00:05:08.738 free 0x2000004ffec0 64 00:05:08.738 unregister 0x200000400000 4194304 PASSED 00:05:08.738 free 0x2000009fffc0 4194304 00:05:08.738 unregister 0x200000800000 6291456 PASSED 00:05:08.738 malloc 8388608 00:05:08.738 register 0x200000400000 10485760 00:05:08.738 buf 0x2000005fffc0 len 8388608 PASSED 00:05:08.738 free 0x2000005fffc0 8388608 00:05:08.738 unregister 0x200000400000 10485760 PASSED 00:05:08.738 passed 00:05:08.738 00:05:08.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.738 suites 1 1 n/a 0 0 00:05:08.738 tests 1 1 1 0 0 00:05:08.738 asserts 15 15 15 0 n/a 00:05:08.738 00:05:08.738 Elapsed time = 0.058 seconds 00:05:08.738 00:05:08.738 real 0m0.259s 00:05:08.738 user 0m0.091s 00:05:08.738 sys 0m0.066s 00:05:08.738 09:36:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.738 09:36:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:08.738 ************************************ 00:05:08.738 END TEST env_mem_callbacks 00:05:08.738 ************************************ 00:05:08.997 00:05:08.997 real 0m7.980s 00:05:08.997 user 0m6.145s 00:05:08.997 sys 0m1.433s 00:05:08.997 09:36:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.997 09:36:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.997 ************************************ 00:05:08.997 END TEST env 00:05:08.997 ************************************ 00:05:08.997 09:36:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.997 09:36:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.997 09:36:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.997 09:36:56 -- common/autotest_common.sh@10 -- # set +x 00:05:08.997 ************************************ 00:05:08.997 START TEST rpc 00:05:08.997 ************************************ 00:05:08.997 09:36:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.997 * Looking for test storage... 
00:05:08.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.997 09:36:56 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.997 09:36:56 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.997 09:36:56 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.997 09:36:56 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.997 09:36:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.997 09:36:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.997 09:36:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.997 09:36:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.997 09:36:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.997 09:36:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.997 09:36:56 rpc -- scripts/common.sh@345 -- # : 1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.997 09:36:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.997 09:36:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.997 09:36:56 rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.997 09:36:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.997 09:36:56 rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.997 09:36:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.255 09:36:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.255 09:36:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.255 09:36:56 rpc -- scripts/common.sh@368 -- # return 0 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.255 --rc genhtml_branch_coverage=1 00:05:09.255 --rc genhtml_function_coverage=1 00:05:09.255 --rc genhtml_legend=1 00:05:09.255 --rc geninfo_all_blocks=1 00:05:09.255 --rc geninfo_unexecuted_blocks=1 00:05:09.255 00:05:09.255 ' 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.255 --rc genhtml_branch_coverage=1 00:05:09.255 --rc genhtml_function_coverage=1 00:05:09.255 --rc genhtml_legend=1 00:05:09.255 --rc geninfo_all_blocks=1 00:05:09.255 --rc geninfo_unexecuted_blocks=1 00:05:09.255 00:05:09.255 ' 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.255 --rc genhtml_branch_coverage=1 00:05:09.255 --rc genhtml_function_coverage=1 00:05:09.255 --rc 
genhtml_legend=1 00:05:09.255 --rc geninfo_all_blocks=1 00:05:09.255 --rc geninfo_unexecuted_blocks=1 00:05:09.255 00:05:09.255 ' 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.255 --rc genhtml_branch_coverage=1 00:05:09.255 --rc genhtml_function_coverage=1 00:05:09.255 --rc genhtml_legend=1 00:05:09.255 --rc geninfo_all_blocks=1 00:05:09.255 --rc geninfo_unexecuted_blocks=1 00:05:09.255 00:05:09.255 ' 00:05:09.255 09:36:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59109 00:05:09.255 09:36:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.255 09:36:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59109 00:05:09.255 09:36:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 59109 ']' 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.255 09:36:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.255 [2024-12-05 09:36:56.779687] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:09.255 [2024-12-05 09:36:56.779882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:05:09.514 [2024-12-05 09:36:56.964501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.514 [2024-12-05 09:36:57.074787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.514 [2024-12-05 09:36:57.074881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59109' to capture a snapshot of events at runtime. 00:05:09.514 [2024-12-05 09:36:57.074897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.514 [2024-12-05 09:36:57.074915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.514 [2024-12-05 09:36:57.074925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59109 for offline analysis/debug. 
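Every rpc_cmd invocation in the suites that follow is a JSON-RPC 2.0 request sent over the Unix domain socket /var/tmp/spdk.sock that spdk_tgt listens on — the same socket waitforlisten polls for above. A minimal standalone client sketch, assuming the target is already up; it uses only POSIX calls, and the buffer size and single-read response handling are illustrative (a real client would loop until the JSON object is complete):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("connect");
                return 1;
        }

        /* Same request the rpc_integrity suite issues via rpc_cmd. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
        write(fd, req, strlen(req));

        char resp[65536];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) {
                resp[n] = '\0';
                printf("%s\n", resp); /* JSON array of bdevs */
        }
        close(fd);
        return 0;
}

The bdev descriptors dumped by rpc_integrity below are the "result" payload of exactly this kind of response.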
00:05:09.514 [2024-12-05 09:36:57.076286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.449 09:36:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.449 09:36:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:10.449 09:36:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.449 09:36:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.449 09:36:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.449 09:36:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.449 09:36:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.449 09:36:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.449 09:36:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 ************************************ 00:05:10.449 START TEST rpc_integrity 00:05:10.449 ************************************ 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 09:36:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.449 { 00:05:10.449 "aliases": [ 00:05:10.449 "5b6222ad-3932-453d-b643-df010acc0e62" 00:05:10.449 ], 00:05:10.449 "assigned_rate_limits": { 00:05:10.449 "r_mbytes_per_sec": 0, 00:05:10.449 "rw_ios_per_sec": 0, 00:05:10.449 "rw_mbytes_per_sec": 0, 00:05:10.449 "w_mbytes_per_sec": 0 00:05:10.449 }, 00:05:10.449 "block_size": 512, 00:05:10.449 "claimed": false, 00:05:10.449 "driver_specific": {}, 00:05:10.449 "memory_domains": [ 00:05:10.449 { 00:05:10.449 "dma_device_id": "system", 00:05:10.449 "dma_device_type": 1 00:05:10.449 }, 00:05:10.449 { 00:05:10.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.449 "dma_device_type": 2 00:05:10.449 } 00:05:10.449 ], 00:05:10.449 "name": "Malloc0", 
00:05:10.449 "num_blocks": 16384, 00:05:10.449 "product_name": "Malloc disk", 00:05:10.449 "supported_io_types": { 00:05:10.449 "abort": true, 00:05:10.449 "compare": false, 00:05:10.449 "compare_and_write": false, 00:05:10.449 "copy": true, 00:05:10.449 "flush": true, 00:05:10.449 "get_zone_info": false, 00:05:10.449 "nvme_admin": false, 00:05:10.449 "nvme_io": false, 00:05:10.449 "nvme_io_md": false, 00:05:10.449 "nvme_iov_md": false, 00:05:10.449 "read": true, 00:05:10.449 "reset": true, 00:05:10.449 "seek_data": false, 00:05:10.449 "seek_hole": false, 00:05:10.449 "unmap": true, 00:05:10.449 "write": true, 00:05:10.449 "write_zeroes": true, 00:05:10.449 "zcopy": true, 00:05:10.449 "zone_append": false, 00:05:10.449 "zone_management": false 00:05:10.449 }, 00:05:10.449 "uuid": "5b6222ad-3932-453d-b643-df010acc0e62", 00:05:10.449 "zoned": false 00:05:10.449 } 00:05:10.449 ]' 00:05:10.449 09:36:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.449 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.449 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 [2024-12-05 09:36:58.020374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.449 [2024-12-05 09:36:58.020461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.449 [2024-12-05 09:36:58.020511] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:10.449 [2024-12-05 09:36:58.020538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.449 [2024-12-05 09:36:58.023588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.449 [2024-12-05 09:36:58.023659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.449 Passthru0 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.449 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.449 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.449 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.449 { 00:05:10.449 "aliases": [ 00:05:10.449 "5b6222ad-3932-453d-b643-df010acc0e62" 00:05:10.449 ], 00:05:10.449 "assigned_rate_limits": { 00:05:10.449 "r_mbytes_per_sec": 0, 00:05:10.449 "rw_ios_per_sec": 0, 00:05:10.449 "rw_mbytes_per_sec": 0, 00:05:10.449 "w_mbytes_per_sec": 0 00:05:10.449 }, 00:05:10.449 "block_size": 512, 00:05:10.449 "claim_type": "exclusive_write", 00:05:10.449 "claimed": true, 00:05:10.449 "driver_specific": {}, 00:05:10.449 "memory_domains": [ 00:05:10.449 { 00:05:10.449 "dma_device_id": "system", 00:05:10.449 "dma_device_type": 1 00:05:10.449 }, 00:05:10.449 { 00:05:10.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.449 "dma_device_type": 2 00:05:10.449 } 00:05:10.449 ], 00:05:10.449 "name": "Malloc0", 00:05:10.449 "num_blocks": 16384, 00:05:10.449 "product_name": "Malloc disk", 00:05:10.449 "supported_io_types": { 00:05:10.449 "abort": true, 00:05:10.449 "compare": false, 00:05:10.449 
"compare_and_write": false, 00:05:10.449 "copy": true, 00:05:10.449 "flush": true, 00:05:10.449 "get_zone_info": false, 00:05:10.449 "nvme_admin": false, 00:05:10.449 "nvme_io": false, 00:05:10.449 "nvme_io_md": false, 00:05:10.449 "nvme_iov_md": false, 00:05:10.449 "read": true, 00:05:10.449 "reset": true, 00:05:10.449 "seek_data": false, 00:05:10.449 "seek_hole": false, 00:05:10.449 "unmap": true, 00:05:10.449 "write": true, 00:05:10.449 "write_zeroes": true, 00:05:10.449 "zcopy": true, 00:05:10.449 "zone_append": false, 00:05:10.449 "zone_management": false 00:05:10.449 }, 00:05:10.449 "uuid": "5b6222ad-3932-453d-b643-df010acc0e62", 00:05:10.449 "zoned": false 00:05:10.449 }, 00:05:10.449 { 00:05:10.449 "aliases": [ 00:05:10.449 "03f01055-553c-56e6-9b20-7269345c43cd" 00:05:10.449 ], 00:05:10.449 "assigned_rate_limits": { 00:05:10.449 "r_mbytes_per_sec": 0, 00:05:10.449 "rw_ios_per_sec": 0, 00:05:10.449 "rw_mbytes_per_sec": 0, 00:05:10.449 "w_mbytes_per_sec": 0 00:05:10.449 }, 00:05:10.449 "block_size": 512, 00:05:10.449 "claimed": false, 00:05:10.449 "driver_specific": { 00:05:10.449 "passthru": { 00:05:10.449 "base_bdev_name": "Malloc0", 00:05:10.449 "name": "Passthru0" 00:05:10.449 } 00:05:10.449 }, 00:05:10.449 "memory_domains": [ 00:05:10.449 { 00:05:10.449 "dma_device_id": "system", 00:05:10.449 "dma_device_type": 1 00:05:10.449 }, 00:05:10.449 { 00:05:10.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.449 "dma_device_type": 2 00:05:10.449 } 00:05:10.449 ], 00:05:10.449 "name": "Passthru0", 00:05:10.449 "num_blocks": 16384, 00:05:10.449 "product_name": "passthru", 00:05:10.449 "supported_io_types": { 00:05:10.449 "abort": true, 00:05:10.449 "compare": false, 00:05:10.449 "compare_and_write": false, 00:05:10.449 "copy": true, 00:05:10.449 "flush": true, 00:05:10.449 "get_zone_info": false, 00:05:10.449 "nvme_admin": false, 00:05:10.449 "nvme_io": false, 00:05:10.449 "nvme_io_md": false, 00:05:10.449 "nvme_iov_md": false, 00:05:10.449 "read": true, 00:05:10.449 "reset": true, 00:05:10.449 "seek_data": false, 00:05:10.449 "seek_hole": false, 00:05:10.449 "unmap": true, 00:05:10.449 "write": true, 00:05:10.450 "write_zeroes": true, 00:05:10.450 "zcopy": true, 00:05:10.450 "zone_append": false, 00:05:10.450 "zone_management": false 00:05:10.450 }, 00:05:10.450 "uuid": "03f01055-553c-56e6-9b20-7269345c43cd", 00:05:10.450 "zoned": false 00:05:10.450 } 00:05:10.450 ]' 00:05:10.450 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.708 ************************************ 00:05:10.708 END TEST rpc_integrity 00:05:10.708 ************************************ 00:05:10.708 09:36:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.708 00:05:10.708 real 0m0.337s 00:05:10.708 user 0m0.207s 00:05:10.708 sys 0m0.037s 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 09:36:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.708 09:36:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.708 09:36:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.708 09:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 ************************************ 00:05:10.708 START TEST rpc_plugins 00:05:10.708 ************************************ 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.708 { 00:05:10.708 "aliases": [ 00:05:10.708 "2b13fe10-7919-40cf-be33-54907b9cee59" 00:05:10.708 ], 00:05:10.708 "assigned_rate_limits": { 00:05:10.708 "r_mbytes_per_sec": 0, 00:05:10.708 "rw_ios_per_sec": 0, 00:05:10.708 "rw_mbytes_per_sec": 0, 00:05:10.708 "w_mbytes_per_sec": 0 00:05:10.708 }, 00:05:10.708 "block_size": 4096, 00:05:10.708 "claimed": false, 00:05:10.708 "driver_specific": {}, 00:05:10.708 "memory_domains": [ 00:05:10.708 { 00:05:10.708 "dma_device_id": "system", 00:05:10.708 "dma_device_type": 1 00:05:10.708 }, 00:05:10.708 { 00:05:10.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.708 "dma_device_type": 2 00:05:10.708 } 00:05:10.708 ], 00:05:10.708 "name": "Malloc1", 00:05:10.708 "num_blocks": 256, 00:05:10.708 "product_name": "Malloc disk", 00:05:10.708 "supported_io_types": { 00:05:10.708 "abort": true, 00:05:10.708 "compare": false, 00:05:10.708 "compare_and_write": false, 00:05:10.708 "copy": true, 00:05:10.708 "flush": true, 00:05:10.708 "get_zone_info": false, 00:05:10.708 "nvme_admin": false, 00:05:10.708 "nvme_io": false, 00:05:10.708 "nvme_io_md": false, 00:05:10.708 "nvme_iov_md": false, 00:05:10.708 "read": true, 00:05:10.708 "reset": true, 00:05:10.708 "seek_data": false, 00:05:10.708 "seek_hole": false, 00:05:10.708 "unmap": true, 00:05:10.708 "write": true, 00:05:10.708 "write_zeroes": true, 00:05:10.708 "zcopy": true, 00:05:10.708 "zone_append": false, 
00:05:10.708 "zone_management": false 00:05:10.708 }, 00:05:10.708 "uuid": "2b13fe10-7919-40cf-be33-54907b9cee59", 00:05:10.708 "zoned": false 00:05:10.708 } 00:05:10.708 ]' 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.708 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.708 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.967 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.967 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.967 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.967 09:36:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.967 00:05:10.967 real 0m0.152s 00:05:10.967 user 0m0.094s 00:05:10.967 sys 0m0.021s 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.967 09:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 ************************************ 00:05:10.967 END TEST rpc_plugins 00:05:10.967 ************************************ 00:05:10.967 09:36:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.967 09:36:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.967 09:36:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.967 09:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 ************************************ 00:05:10.967 START TEST rpc_trace_cmd_test 00:05:10.967 ************************************ 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.967 "bdev": { 00:05:10.967 "mask": "0x8", 00:05:10.967 "tpoint_mask": "0xffffffffffffffff" 00:05:10.967 }, 00:05:10.967 "bdev_nvme": { 00:05:10.967 "mask": "0x4000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "bdev_raid": { 00:05:10.967 "mask": "0x20000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "blob": { 00:05:10.967 "mask": "0x10000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "blobfs": { 00:05:10.967 "mask": "0x80", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "dsa": { 00:05:10.967 "mask": "0x200", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "ftl": { 00:05:10.967 "mask": "0x40", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "iaa": { 00:05:10.967 "mask": "0x1000", 
00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "iscsi_conn": { 00:05:10.967 "mask": "0x2", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "nvme_pcie": { 00:05:10.967 "mask": "0x800", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "nvme_tcp": { 00:05:10.967 "mask": "0x2000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "nvmf_rdma": { 00:05:10.967 "mask": "0x10", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "nvmf_tcp": { 00:05:10.967 "mask": "0x20", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "scheduler": { 00:05:10.967 "mask": "0x40000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "scsi": { 00:05:10.967 "mask": "0x4", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "sock": { 00:05:10.967 "mask": "0x8000", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "thread": { 00:05:10.967 "mask": "0x400", 00:05:10.967 "tpoint_mask": "0x0" 00:05:10.967 }, 00:05:10.967 "tpoint_group_mask": "0x8", 00:05:10.967 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59109" 00:05:10.967 }' 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.967 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.226 00:05:11.226 real 0m0.282s 00:05:11.226 user 0m0.241s 00:05:11.226 sys 0m0.028s 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.226 ************************************ 00:05:11.226 09:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.226 END TEST rpc_trace_cmd_test 00:05:11.226 ************************************ 00:05:11.226 09:36:58 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:11.226 09:36:58 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:11.226 09:36:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.226 09:36:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.226 09:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.226 ************************************ 00:05:11.226 START TEST go_rpc 00:05:11.226 ************************************ 00:05:11.226 09:36:58 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:05:11.226 09:36:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.226 09:36:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:11.226 09:36:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:11.485 09:36:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:11.485 09:36:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.485 09:36:58 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.485 09:36:58 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.485 09:36:58 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.485 09:36:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:11.485 09:36:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["0e9e16ca-0902-4498-a2c1-e6f73b508af4"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"0e9e16ca-0902-4498-a2c1-e6f73b508af4","zoned":false}]' 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.486 09:36:58 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.486 09:36:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.486 09:36:58 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:11.486 09:36:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:11.486 ************************************ 00:05:11.486 END TEST go_rpc 00:05:11.486 ************************************ 00:05:11.486 09:36:59 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:11.486 00:05:11.486 real 0m0.248s 00:05:11.486 user 0m0.157s 00:05:11.486 sys 0m0.029s 00:05:11.486 09:36:59 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.486 09:36:59 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.486 09:36:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.486 09:36:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.486 09:36:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.486 09:36:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.486 09:36:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.486 ************************************ 00:05:11.486 START TEST rpc_daemon_integrity 00:05:11.486 ************************************ 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.486 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.745 
09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.745 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.745 { 00:05:11.745 "aliases": [ 00:05:11.745 "413cab45-dd75-4503-b53f-d520001e6d81" 00:05:11.745 ], 00:05:11.745 "assigned_rate_limits": { 00:05:11.746 "r_mbytes_per_sec": 0, 00:05:11.746 "rw_ios_per_sec": 0, 00:05:11.746 "rw_mbytes_per_sec": 0, 00:05:11.746 "w_mbytes_per_sec": 0 00:05:11.746 }, 00:05:11.746 "block_size": 512, 00:05:11.746 "claimed": false, 00:05:11.746 "driver_specific": {}, 00:05:11.746 "memory_domains": [ 00:05:11.746 { 00:05:11.746 "dma_device_id": "system", 00:05:11.746 "dma_device_type": 1 00:05:11.746 }, 00:05:11.746 { 00:05:11.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.746 "dma_device_type": 2 00:05:11.746 } 00:05:11.746 ], 00:05:11.746 "name": "Malloc3", 00:05:11.746 "num_blocks": 16384, 00:05:11.746 "product_name": "Malloc disk", 00:05:11.746 "supported_io_types": { 00:05:11.746 "abort": true, 00:05:11.746 "compare": false, 00:05:11.746 "compare_and_write": false, 00:05:11.746 "copy": true, 00:05:11.746 "flush": true, 00:05:11.746 "get_zone_info": false, 00:05:11.746 "nvme_admin": false, 00:05:11.746 "nvme_io": false, 00:05:11.746 "nvme_io_md": false, 00:05:11.746 "nvme_iov_md": false, 00:05:11.746 "read": true, 00:05:11.746 "reset": true, 00:05:11.746 "seek_data": false, 00:05:11.746 "seek_hole": false, 00:05:11.746 "unmap": true, 00:05:11.746 "write": true, 00:05:11.746 "write_zeroes": true, 00:05:11.746 "zcopy": true, 00:05:11.746 "zone_append": false, 00:05:11.746 "zone_management": false 00:05:11.746 }, 00:05:11.746 "uuid": "413cab45-dd75-4503-b53f-d520001e6d81", 00:05:11.746 "zoned": false 00:05:11.746 } 00:05:11.746 ]' 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 [2024-12-05 09:36:59.242893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.746 [2024-12-05 09:36:59.243003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.746 [2024-12-05 09:36:59.243035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:11.746 [2024-12-05 09:36:59.243053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:11.746 [2024-12-05 09:36:59.245906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.746 [2024-12-05 09:36:59.245997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.746 Passthru0 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.746 { 00:05:11.746 "aliases": [ 00:05:11.746 "413cab45-dd75-4503-b53f-d520001e6d81" 00:05:11.746 ], 00:05:11.746 "assigned_rate_limits": { 00:05:11.746 "r_mbytes_per_sec": 0, 00:05:11.746 "rw_ios_per_sec": 0, 00:05:11.746 "rw_mbytes_per_sec": 0, 00:05:11.746 "w_mbytes_per_sec": 0 00:05:11.746 }, 00:05:11.746 "block_size": 512, 00:05:11.746 "claim_type": "exclusive_write", 00:05:11.746 "claimed": true, 00:05:11.746 "driver_specific": {}, 00:05:11.746 "memory_domains": [ 00:05:11.746 { 00:05:11.746 "dma_device_id": "system", 00:05:11.746 "dma_device_type": 1 00:05:11.746 }, 00:05:11.746 { 00:05:11.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.746 "dma_device_type": 2 00:05:11.746 } 00:05:11.746 ], 00:05:11.746 "name": "Malloc3", 00:05:11.746 "num_blocks": 16384, 00:05:11.746 "product_name": "Malloc disk", 00:05:11.746 "supported_io_types": { 00:05:11.746 "abort": true, 00:05:11.746 "compare": false, 00:05:11.746 "compare_and_write": false, 00:05:11.746 "copy": true, 00:05:11.746 "flush": true, 00:05:11.746 "get_zone_info": false, 00:05:11.746 "nvme_admin": false, 00:05:11.746 "nvme_io": false, 00:05:11.746 "nvme_io_md": false, 00:05:11.746 "nvme_iov_md": false, 00:05:11.746 "read": true, 00:05:11.746 "reset": true, 00:05:11.746 "seek_data": false, 00:05:11.746 "seek_hole": false, 00:05:11.746 "unmap": true, 00:05:11.746 "write": true, 00:05:11.746 "write_zeroes": true, 00:05:11.746 "zcopy": true, 00:05:11.746 "zone_append": false, 00:05:11.746 "zone_management": false 00:05:11.746 }, 00:05:11.746 "uuid": "413cab45-dd75-4503-b53f-d520001e6d81", 00:05:11.746 "zoned": false 00:05:11.746 }, 00:05:11.746 { 00:05:11.746 "aliases": [ 00:05:11.746 "c2399159-69ff-5c76-8944-e078e4b5cc71" 00:05:11.746 ], 00:05:11.746 "assigned_rate_limits": { 00:05:11.746 "r_mbytes_per_sec": 0, 00:05:11.746 "rw_ios_per_sec": 0, 00:05:11.746 "rw_mbytes_per_sec": 0, 00:05:11.746 "w_mbytes_per_sec": 0 00:05:11.746 }, 00:05:11.746 "block_size": 512, 00:05:11.746 "claimed": false, 00:05:11.746 "driver_specific": { 00:05:11.746 "passthru": { 00:05:11.746 "base_bdev_name": "Malloc3", 00:05:11.746 "name": "Passthru0" 00:05:11.746 } 00:05:11.746 }, 00:05:11.746 "memory_domains": [ 00:05:11.746 { 00:05:11.746 "dma_device_id": "system", 00:05:11.746 "dma_device_type": 1 00:05:11.746 }, 00:05:11.746 { 00:05:11.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.746 "dma_device_type": 2 00:05:11.746 } 00:05:11.746 ], 00:05:11.746 "name": "Passthru0", 00:05:11.746 "num_blocks": 16384, 00:05:11.746 "product_name": "passthru", 00:05:11.746 "supported_io_types": { 00:05:11.746 "abort": true, 00:05:11.746 "compare": false, 00:05:11.746 "compare_and_write": false, 00:05:11.746 "copy": true, 
00:05:11.746 "flush": true, 00:05:11.746 "get_zone_info": false, 00:05:11.746 "nvme_admin": false, 00:05:11.746 "nvme_io": false, 00:05:11.746 "nvme_io_md": false, 00:05:11.746 "nvme_iov_md": false, 00:05:11.746 "read": true, 00:05:11.746 "reset": true, 00:05:11.746 "seek_data": false, 00:05:11.746 "seek_hole": false, 00:05:11.746 "unmap": true, 00:05:11.746 "write": true, 00:05:11.746 "write_zeroes": true, 00:05:11.746 "zcopy": true, 00:05:11.746 "zone_append": false, 00:05:11.746 "zone_management": false 00:05:11.746 }, 00:05:11.746 "uuid": "c2399159-69ff-5c76-8944-e078e4b5cc71", 00:05:11.746 "zoned": false 00:05:11.746 } 00:05:11.746 ]' 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.746 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.005 00:05:12.005 real 0m0.354s 00:05:12.005 user 0m0.220s 00:05:12.005 sys 0m0.038s 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.005 09:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.005 ************************************ 00:05:12.005 END TEST rpc_daemon_integrity 00:05:12.005 ************************************ 00:05:12.005 09:36:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:12.005 09:36:59 rpc -- rpc/rpc.sh@84 -- # killprocess 59109 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 59109 ']' 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@958 -- # kill -0 59109 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@959 -- # uname 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59109 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.005 killing process with pid 59109 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59109' 00:05:12.005 09:36:59 rpc -- 
common/autotest_common.sh@973 -- # kill 59109 00:05:12.005 09:36:59 rpc -- common/autotest_common.sh@978 -- # wait 59109 00:05:13.909 00:05:13.909 real 0m4.998s 00:05:13.909 user 0m5.820s 00:05:13.909 sys 0m0.966s 00:05:13.909 09:37:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.909 ************************************ 00:05:13.909 END TEST rpc 00:05:13.909 ************************************ 00:05:13.909 09:37:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.909 09:37:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:13.909 09:37:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.909 09:37:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.909 09:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:13.909 ************************************ 00:05:13.909 START TEST skip_rpc 00:05:13.909 ************************************ 00:05:13.909 09:37:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.168 * Looking for test storage... 00:05:14.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.168 09:37:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.168 --rc genhtml_branch_coverage=1 00:05:14.168 --rc genhtml_function_coverage=1 00:05:14.168 --rc genhtml_legend=1 00:05:14.168 --rc geninfo_all_blocks=1 00:05:14.168 --rc geninfo_unexecuted_blocks=1 00:05:14.168 00:05:14.168 ' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.168 --rc genhtml_branch_coverage=1 00:05:14.168 --rc genhtml_function_coverage=1 00:05:14.168 --rc genhtml_legend=1 00:05:14.168 --rc geninfo_all_blocks=1 00:05:14.168 --rc geninfo_unexecuted_blocks=1 00:05:14.168 00:05:14.168 ' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.168 --rc genhtml_branch_coverage=1 00:05:14.168 --rc genhtml_function_coverage=1 00:05:14.168 --rc genhtml_legend=1 00:05:14.168 --rc geninfo_all_blocks=1 00:05:14.168 --rc geninfo_unexecuted_blocks=1 00:05:14.168 00:05:14.168 ' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.168 --rc genhtml_branch_coverage=1 00:05:14.168 --rc genhtml_function_coverage=1 00:05:14.168 --rc genhtml_legend=1 00:05:14.168 --rc geninfo_all_blocks=1 00:05:14.168 --rc geninfo_unexecuted_blocks=1 00:05:14.168 00:05:14.168 ' 00:05:14.168 09:37:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.168 09:37:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.168 09:37:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.168 09:37:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.168 ************************************ 00:05:14.168 START TEST skip_rpc 00:05:14.168 ************************************ 00:05:14.168 09:37:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:14.168 09:37:01 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59401 00:05:14.168 09:37:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.168 09:37:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.168 09:37:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.426 [2024-12-05 09:37:01.825441] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:14.426 [2024-12-05 09:37:01.826157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59401 ] 00:05:14.426 [2024-12-05 09:37:01.996689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.684 [2024-12-05 09:37:02.113992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.950 2024/12/05 09:37:06 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59401 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59401 ']' 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59401 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59401 00:05:19.950 killing process with pid 59401 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.950 09:37:06 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59401' 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59401 00:05:19.950 09:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59401 00:05:21.331 00:05:21.331 real 0m6.876s 00:05:21.331 user 0m6.330s 00:05:21.331 sys 0m0.436s 00:05:21.331 09:37:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.331 ************************************ 00:05:21.331 END TEST skip_rpc 00:05:21.331 ************************************ 00:05:21.331 09:37:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.331 09:37:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.331 09:37:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.331 09:37:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.331 09:37:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.331 ************************************ 00:05:21.331 START TEST skip_rpc_with_json 00:05:21.331 ************************************ 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59506 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59506 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59506 ']' 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.331 09:37:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.331 [2024-12-05 09:37:08.763740] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
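[annotation] The skip_rpc case that just finished above launches spdk_tgt with --no-rpc-server, so /var/tmp/spdk.sock is never created, and the test passes precisely because the spdk_get_version RPC fails to connect (the "dial unix /var/tmp/spdk.sock" error above). A minimal stand-alone sketch of the same check, assuming the checkout path and flags shown in the trace; this is an illustration, not the harness code itself:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target with its RPC server disabled; no Unix socket appears.
    "$SPDK"/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                               # the test sleeps instead of waiting on a socket
    # The call must fail: nothing is listening on /var/tmp/spdk.sock.
    if "$SPDK"/scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"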
00:05:21.331 [2024-12-05 09:37:08.764023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59506 ] 00:05:21.332 [2024-12-05 09:37:08.942519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.590 [2024-12-05 09:37:09.047122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.527 [2024-12-05 09:37:09.800474] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.527 2024/12/05 09:37:09 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:22.527 request: 00:05:22.527 { 00:05:22.527 "method": "nvmf_get_transports", 00:05:22.527 "params": { 00:05:22.527 "trtype": "tcp" 00:05:22.527 } 00:05:22.527 } 00:05:22.527 Got JSON-RPC error response 00:05:22.527 GoRPCClient: error on JSON-RPC call 00:05:22.527 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.528 [2024-12-05 09:37:09.812606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.528 09:37:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.528 { 00:05:22.528 "subsystems": [ 00:05:22.528 { 00:05:22.528 "subsystem": "fsdev", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "fsdev_set_opts", 00:05:22.528 "params": { 00:05:22.528 "fsdev_io_cache_size": 256, 00:05:22.528 "fsdev_io_pool_size": 65535 00:05:22.528 } 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "vfio_user_target", 00:05:22.528 "config": null 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "keyring", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "iobuf", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "iobuf_set_options", 00:05:22.528 "params": { 00:05:22.528 "enable_numa": false, 00:05:22.528 "large_bufsize": 135168, 00:05:22.528 
"large_pool_count": 1024, 00:05:22.528 "small_bufsize": 8192, 00:05:22.528 "small_pool_count": 8192 00:05:22.528 } 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "sock", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "sock_set_default_impl", 00:05:22.528 "params": { 00:05:22.528 "impl_name": "posix" 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "sock_impl_set_options", 00:05:22.528 "params": { 00:05:22.528 "enable_ktls": false, 00:05:22.528 "enable_placement_id": 0, 00:05:22.528 "enable_quickack": false, 00:05:22.528 "enable_recv_pipe": true, 00:05:22.528 "enable_zerocopy_send_client": false, 00:05:22.528 "enable_zerocopy_send_server": true, 00:05:22.528 "impl_name": "ssl", 00:05:22.528 "recv_buf_size": 4096, 00:05:22.528 "send_buf_size": 4096, 00:05:22.528 "tls_version": 0, 00:05:22.528 "zerocopy_threshold": 0 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "sock_impl_set_options", 00:05:22.528 "params": { 00:05:22.528 "enable_ktls": false, 00:05:22.528 "enable_placement_id": 0, 00:05:22.528 "enable_quickack": false, 00:05:22.528 "enable_recv_pipe": true, 00:05:22.528 "enable_zerocopy_send_client": false, 00:05:22.528 "enable_zerocopy_send_server": true, 00:05:22.528 "impl_name": "posix", 00:05:22.528 "recv_buf_size": 2097152, 00:05:22.528 "send_buf_size": 2097152, 00:05:22.528 "tls_version": 0, 00:05:22.528 "zerocopy_threshold": 0 00:05:22.528 } 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "vmd", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "accel", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "accel_set_options", 00:05:22.528 "params": { 00:05:22.528 "buf_count": 2048, 00:05:22.528 "large_cache_size": 16, 00:05:22.528 "sequence_count": 2048, 00:05:22.528 "small_cache_size": 128, 00:05:22.528 "task_count": 2048 00:05:22.528 } 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "bdev", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "bdev_set_options", 00:05:22.528 "params": { 00:05:22.528 "bdev_auto_examine": true, 00:05:22.528 "bdev_io_cache_size": 256, 00:05:22.528 "bdev_io_pool_size": 65535, 00:05:22.528 "iobuf_large_cache_size": 16, 00:05:22.528 "iobuf_small_cache_size": 128 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "bdev_raid_set_options", 00:05:22.528 "params": { 00:05:22.528 "process_max_bandwidth_mb_sec": 0, 00:05:22.528 "process_window_size_kb": 1024 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "bdev_iscsi_set_options", 00:05:22.528 "params": { 00:05:22.528 "timeout_sec": 30 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "bdev_nvme_set_options", 00:05:22.528 "params": { 00:05:22.528 "action_on_timeout": "none", 00:05:22.528 "allow_accel_sequence": false, 00:05:22.528 "arbitration_burst": 0, 00:05:22.528 "bdev_retry_count": 3, 00:05:22.528 "ctrlr_loss_timeout_sec": 0, 00:05:22.528 "delay_cmd_submit": true, 00:05:22.528 "dhchap_dhgroups": [ 00:05:22.528 "null", 00:05:22.528 "ffdhe2048", 00:05:22.528 "ffdhe3072", 00:05:22.528 "ffdhe4096", 00:05:22.528 "ffdhe6144", 00:05:22.528 "ffdhe8192" 00:05:22.528 ], 00:05:22.528 "dhchap_digests": [ 00:05:22.528 "sha256", 00:05:22.528 "sha384", 00:05:22.528 "sha512" 00:05:22.528 ], 00:05:22.528 "disable_auto_failback": false, 00:05:22.528 "fast_io_fail_timeout_sec": 0, 00:05:22.528 "generate_uuids": false, 00:05:22.528 
"high_priority_weight": 0, 00:05:22.528 "io_path_stat": false, 00:05:22.528 "io_queue_requests": 0, 00:05:22.528 "keep_alive_timeout_ms": 10000, 00:05:22.528 "low_priority_weight": 0, 00:05:22.528 "medium_priority_weight": 0, 00:05:22.528 "nvme_adminq_poll_period_us": 10000, 00:05:22.528 "nvme_error_stat": false, 00:05:22.528 "nvme_ioq_poll_period_us": 0, 00:05:22.528 "rdma_cm_event_timeout_ms": 0, 00:05:22.528 "rdma_max_cq_size": 0, 00:05:22.528 "rdma_srq_size": 0, 00:05:22.528 "reconnect_delay_sec": 0, 00:05:22.528 "timeout_admin_us": 0, 00:05:22.528 "timeout_us": 0, 00:05:22.528 "transport_ack_timeout": 0, 00:05:22.528 "transport_retry_count": 4, 00:05:22.528 "transport_tos": 0 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "bdev_nvme_set_hotplug", 00:05:22.528 "params": { 00:05:22.528 "enable": false, 00:05:22.528 "period_us": 100000 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "bdev_wait_for_examine" 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "scsi", 00:05:22.528 "config": null 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "scheduler", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "framework_set_scheduler", 00:05:22.528 "params": { 00:05:22.528 "name": "static" 00:05:22.528 } 00:05:22.528 } 00:05:22.528 ] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "vhost_scsi", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "vhost_blk", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "ublk", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "nbd", 00:05:22.528 "config": [] 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "subsystem": "nvmf", 00:05:22.528 "config": [ 00:05:22.528 { 00:05:22.528 "method": "nvmf_set_config", 00:05:22.528 "params": { 00:05:22.528 "admin_cmd_passthru": { 00:05:22.528 "identify_ctrlr": false 00:05:22.528 }, 00:05:22.528 "dhchap_dhgroups": [ 00:05:22.528 "null", 00:05:22.528 "ffdhe2048", 00:05:22.528 "ffdhe3072", 00:05:22.528 "ffdhe4096", 00:05:22.528 "ffdhe6144", 00:05:22.528 "ffdhe8192" 00:05:22.528 ], 00:05:22.528 "dhchap_digests": [ 00:05:22.528 "sha256", 00:05:22.528 "sha384", 00:05:22.528 "sha512" 00:05:22.528 ], 00:05:22.528 "discovery_filter": "match_any" 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "nvmf_set_max_subsystems", 00:05:22.528 "params": { 00:05:22.528 "max_subsystems": 1024 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "nvmf_set_crdt", 00:05:22.528 "params": { 00:05:22.528 "crdt1": 0, 00:05:22.528 "crdt2": 0, 00:05:22.528 "crdt3": 0 00:05:22.528 } 00:05:22.528 }, 00:05:22.528 { 00:05:22.528 "method": "nvmf_create_transport", 00:05:22.528 "params": { 00:05:22.528 "abort_timeout_sec": 1, 00:05:22.528 "ack_timeout": 0, 00:05:22.528 "buf_cache_size": 4294967295, 00:05:22.528 "c2h_success": true, 00:05:22.528 "data_wr_pool_size": 0, 00:05:22.528 "dif_insert_or_strip": false, 00:05:22.528 "in_capsule_data_size": 4096, 00:05:22.528 "io_unit_size": 131072, 00:05:22.528 "max_aq_depth": 128, 00:05:22.528 "max_io_qpairs_per_ctrlr": 127, 00:05:22.528 "max_io_size": 131072, 00:05:22.528 "max_queue_depth": 128, 00:05:22.528 "num_shared_buffers": 511, 00:05:22.528 "sock_priority": 0, 00:05:22.529 "trtype": "TCP", 00:05:22.529 "zcopy": false 00:05:22.529 } 00:05:22.529 } 00:05:22.529 ] 00:05:22.529 }, 00:05:22.529 { 00:05:22.529 "subsystem": "iscsi", 00:05:22.529 "config": [ 00:05:22.529 { 
00:05:22.529 "method": "iscsi_set_options", 00:05:22.529 "params": { 00:05:22.529 "allow_duplicated_isid": false, 00:05:22.529 "chap_group": 0, 00:05:22.529 "data_out_pool_size": 2048, 00:05:22.529 "default_time2retain": 20, 00:05:22.529 "default_time2wait": 2, 00:05:22.529 "disable_chap": false, 00:05:22.529 "error_recovery_level": 0, 00:05:22.529 "first_burst_length": 8192, 00:05:22.529 "immediate_data": true, 00:05:22.529 "immediate_data_pool_size": 16384, 00:05:22.529 "max_connections_per_session": 2, 00:05:22.529 "max_large_datain_per_connection": 64, 00:05:22.529 "max_queue_depth": 64, 00:05:22.529 "max_r2t_per_connection": 4, 00:05:22.529 "max_sessions": 128, 00:05:22.529 "mutual_chap": false, 00:05:22.529 "node_base": "iqn.2016-06.io.spdk", 00:05:22.529 "nop_in_interval": 30, 00:05:22.529 "nop_timeout": 60, 00:05:22.529 "pdu_pool_size": 36864, 00:05:22.529 "require_chap": false 00:05:22.529 } 00:05:22.529 } 00:05:22.529 ] 00:05:22.529 } 00:05:22.529 ] 00:05:22.529 } 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59506 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59506 ']' 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59506 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59506 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.529 killing process with pid 59506 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59506' 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59506 00:05:22.529 09:37:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59506 00:05:24.434 09:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59568 00:05:24.434 09:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.434 09:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59568 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59568 ']' 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59568 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59568 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:05:29.705 killing process with pid 59568 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59568' 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59568 00:05:29.705 09:37:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59568 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.609 00:05:31.609 real 0m10.101s 00:05:31.609 user 0m9.514s 00:05:31.609 sys 0m0.984s 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.609 ************************************ 00:05:31.609 END TEST skip_rpc_with_json 00:05:31.609 ************************************ 00:05:31.609 09:37:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.609 09:37:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.609 09:37:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.609 09:37:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.609 ************************************ 00:05:31.609 START TEST skip_rpc_with_delay 00:05:31.609 ************************************ 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.609 [2024-12-05 09:37:18.907779] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
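[annotation] The skip_rpc_with_delay case above is a pure negative test: '--wait-for-rpc' makes no sense when '--no-rpc-server' disables the RPC server, so spdk_tgt must refuse to start and exit non-zero (the *ERROR* line above). An equivalent one-off check, assuming the same binary and flags from the trace:

    # Mutually exclusive flags: the app must fail before serving anything.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted conflicting flags" >&2
        exit 1
    fi
    echo "spdk_tgt rejected the flag combination, as expected"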
00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.609 00:05:31.609 real 0m0.201s 00:05:31.609 user 0m0.104s 00:05:31.609 sys 0m0.096s 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.609 09:37:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.609 ************************************ 00:05:31.609 END TEST skip_rpc_with_delay 00:05:31.609 ************************************ 00:05:31.609 09:37:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.609 09:37:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.609 09:37:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.609 09:37:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.609 09:37:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.609 09:37:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.609 ************************************ 00:05:31.609 START TEST exit_on_failed_rpc_init 00:05:31.609 ************************************ 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59691 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59691 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59691 ']' 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.609 09:37:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.609 [2024-12-05 09:37:19.167173] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:31.609 [2024-12-05 09:37:19.167354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:05:31.868 [2024-12-05 09:37:19.344984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.868 [2024-12-05 09:37:19.446391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.805 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.805 [2024-12-05 09:37:20.395157] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:32.805 [2024-12-05 09:37:20.395365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:05:33.064 [2024-12-05 09:37:20.575670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.335 [2024-12-05 09:37:20.724369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.336 [2024-12-05 09:37:20.724532] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:33.336 [2024-12-05 09:37:20.724560] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.336 [2024-12-05 09:37:20.724584] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59691 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59691 ']' 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59691 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.654 09:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59691 00:05:33.654 09:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.654 09:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.654 killing process with pid 59691 00:05:33.654 09:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59691' 00:05:33.654 09:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59691 00:05:33.654 09:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59691 00:05:35.567 00:05:35.567 real 0m3.848s 00:05:35.567 user 0m4.264s 00:05:35.567 sys 0m0.699s 00:05:35.567 09:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.567 09:37:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.568 ************************************ 00:05:35.568 END TEST exit_on_failed_rpc_init 00:05:35.568 ************************************ 00:05:35.568 09:37:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.568 00:05:35.568 real 0m21.440s 00:05:35.568 user 0m20.397s 00:05:35.568 sys 0m2.428s 00:05:35.568 09:37:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.568 09:37:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.568 ************************************ 00:05:35.568 END TEST skip_rpc 00:05:35.568 ************************************ 00:05:35.568 09:37:22 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.568 09:37:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.568 09:37:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.568 09:37:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.568 
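[annotation] The exit_on_failed_rpc_init case that closes the skip_rpc suite keeps one target bound to the default /var/tmp/spdk.sock and asserts that a second target on the same socket fails fast (the "socket path ... in use" error above) instead of hanging, with the harness then normalizing the non-zero exit status (es=234 -> 106 -> 1 above). A sketch of the same collision, assuming the default socket path:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x1 &   # first target owns /var/tmp/spdk.sock
    first=$!
    sleep 2                               # crude stand-in for waitforlisten
    # Second target, same socket: the RPC listen fails and the app stops non-zero.
    if "$SPDK"/build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second target started on a busy socket" >&2
        kill "$first"; exit 1
    fi
    kill "$first"; wait "$first"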
************************************ 00:05:35.568 START TEST rpc_client 00:05:35.568 ************************************ 00:05:35.568 09:37:22 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.568 * Looking for test storage... 00:05:35.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.568 09:37:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.568 --rc genhtml_branch_coverage=1 00:05:35.568 --rc genhtml_function_coverage=1 00:05:35.568 --rc genhtml_legend=1 00:05:35.568 --rc geninfo_all_blocks=1 00:05:35.568 --rc geninfo_unexecuted_blocks=1 00:05:35.568 00:05:35.568 ' 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.568 --rc genhtml_branch_coverage=1 00:05:35.568 --rc genhtml_function_coverage=1 00:05:35.568 --rc genhtml_legend=1 00:05:35.568 --rc geninfo_all_blocks=1 00:05:35.568 --rc geninfo_unexecuted_blocks=1 00:05:35.568 00:05:35.568 ' 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.568 --rc genhtml_branch_coverage=1 00:05:35.568 --rc genhtml_function_coverage=1 00:05:35.568 --rc genhtml_legend=1 00:05:35.568 --rc geninfo_all_blocks=1 00:05:35.568 --rc geninfo_unexecuted_blocks=1 00:05:35.568 00:05:35.568 ' 00:05:35.568 09:37:23 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.568 --rc genhtml_branch_coverage=1 00:05:35.568 --rc genhtml_function_coverage=1 00:05:35.568 --rc genhtml_legend=1 00:05:35.568 --rc geninfo_all_blocks=1 00:05:35.568 --rc geninfo_unexecuted_blocks=1 00:05:35.568 00:05:35.568 ' 00:05:35.568 09:37:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:35.828 OK 00:05:35.828 09:37:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.828 00:05:35.828 real 0m0.251s 00:05:35.828 user 0m0.156s 00:05:35.828 sys 0m0.106s 00:05:35.828 09:37:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.828 09:37:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.828 ************************************ 00:05:35.828 END TEST rpc_client 00:05:35.828 ************************************ 00:05:35.828 09:37:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.828 09:37:23 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.828 09:37:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.828 09:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:35.828 ************************************ 00:05:35.828 START TEST json_config 00:05:35.828 ************************************ 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.828 09:37:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.828 09:37:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.828 09:37:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.828 09:37:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.828 09:37:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.828 09:37:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.828 09:37:23 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.828 09:37:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.828 09:37:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.828 09:37:23 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.828 09:37:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.828 09:37:23 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.828 09:37:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.828 09:37:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.828 09:37:23 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.828 --rc genhtml_branch_coverage=1 00:05:35.828 --rc genhtml_function_coverage=1 00:05:35.828 --rc genhtml_legend=1 00:05:35.828 --rc geninfo_all_blocks=1 00:05:35.828 --rc geninfo_unexecuted_blocks=1 00:05:35.828 00:05:35.828 ' 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.828 --rc genhtml_branch_coverage=1 00:05:35.828 --rc genhtml_function_coverage=1 00:05:35.828 --rc genhtml_legend=1 00:05:35.828 --rc geninfo_all_blocks=1 00:05:35.828 --rc geninfo_unexecuted_blocks=1 00:05:35.828 00:05:35.828 ' 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.828 --rc genhtml_branch_coverage=1 00:05:35.828 --rc genhtml_function_coverage=1 00:05:35.828 --rc genhtml_legend=1 00:05:35.828 --rc geninfo_all_blocks=1 00:05:35.828 --rc geninfo_unexecuted_blocks=1 00:05:35.828 00:05:35.828 ' 00:05:35.828 09:37:23 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.828 --rc genhtml_branch_coverage=1 00:05:35.828 --rc genhtml_function_coverage=1 00:05:35.828 --rc genhtml_legend=1 00:05:35.828 --rc geninfo_all_blocks=1 00:05:35.828 --rc geninfo_unexecuted_blocks=1 00:05:35.828 00:05:35.828 ' 00:05:35.828 09:37:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.828 09:37:23 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:05:35.828 09:37:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.829 09:37:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.829 09:37:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.829 09:37:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.829 09:37:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.829 09:37:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.829 09:37:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.829 09:37:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.829 09:37:23 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.829 09:37:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.829 09:37:23 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.829 09:37:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.088 INFO: JSON configuration test init 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:36.088 09:37:23 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:36.088 09:37:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.088 09:37:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.089 09:37:23 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.089 09:37:23 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:36.089 09:37:23 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.089 09:37:23 json_config -- json_config/common.sh@10 -- # shift 
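[annotation] Worth noting as the json_config suite spins up: it runs its target on a dedicated RPC socket (/var/tmp/spdk_tgt.sock, passed via -r) so it never collides with anything on the default path, and every RPC in the trace below goes through a small wrapper that pins that socket. A sketch of the pattern, using two methods that appear verbatim further down:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Pin every call to the suite's private socket (mirrors tgt_rpc in the trace).
    tgt_rpc() {
        "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
    tgt_rpc save_config > /tmp/tgt.json   # /tmp/tgt.json is an illustrative path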
00:05:36.089 09:37:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.089 09:37:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.089 09:37:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.089 09:37:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.089 09:37:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.089 09:37:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59885 00:05:36.089 Waiting for target to run... 00:05:36.089 09:37:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.089 09:37:23 json_config -- json_config/common.sh@25 -- # waitforlisten 59885 /var/tmp/spdk_tgt.sock 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 59885 ']' 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.089 09:37:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.089 09:37:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:36.089 [2024-12-05 09:37:23.590518] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:36.089 [2024-12-05 09:37:23.590752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:05:36.657 [2024-12-05 09:37:24.072707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.657 [2024-12-05 09:37:24.205728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:37.225 00:05:37.225 09:37:24 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.225 09:37:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:37.225 09:37:24 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:37.225 09:37:24 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:38.159 09:37:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.159 09:37:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:38.159 09:37:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:38.159 09:37:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@54 -- # sort 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:38.417 09:37:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.417 09:37:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:38.417 09:37:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.417 09:37:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:38.417 09:37:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.417 09:37:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.675 MallocForNvmf0 00:05:38.675 09:37:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.675 09:37:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.933 MallocForNvmf1 00:05:38.933 09:37:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.933 09:37:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:39.192 [2024-12-05 09:37:26.732508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.192 09:37:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.192 09:37:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.450 09:37:26 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.450 09:37:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.709 09:37:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.709 09:37:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.968 09:37:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.968 09:37:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:40.227 [2024-12-05 09:37:27.741205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.227 09:37:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:40.227 09:37:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.227 09:37:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.227 09:37:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:40.227 09:37:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.227 09:37:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.227 09:37:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:40.227 09:37:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:40.227 09:37:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.794 MallocBdevForConfigChangeCheck 00:05:40.794 09:37:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:40.794 09:37:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.794 09:37:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.794 09:37:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:40.794 09:37:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.054 INFO: shutting down applications... 00:05:41.054 09:37:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:41.054 09:37:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:41.054 09:37:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:41.054 09:37:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:41.054 09:37:28 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:41.313 Calling clear_iscsi_subsystem 00:05:41.313 Calling clear_nvmf_subsystem 00:05:41.313 Calling clear_nbd_subsystem 00:05:41.313 Calling clear_ublk_subsystem 00:05:41.313 Calling clear_vhost_blk_subsystem 00:05:41.313 Calling clear_vhost_scsi_subsystem 00:05:41.313 Calling clear_bdev_subsystem 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:41.313 09:37:28 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:41.879 09:37:29 json_config -- json_config/json_config.sh@352 -- # break 00:05:41.879 09:37:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:41.879 09:37:29 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:41.879 09:37:29 json_config -- json_config/common.sh@31 -- # local app=target 00:05:41.879 09:37:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.879 09:37:29 json_config -- json_config/common.sh@35 -- # [[ -n 59885 ]] 00:05:41.879 09:37:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59885 00:05:41.879 09:37:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.879 09:37:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.879 09:37:29 json_config -- json_config/common.sh@41 -- # kill -0 59885 00:05:41.879 09:37:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.445 09:37:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.445 09:37:29 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.445 09:37:29 json_config -- json_config/common.sh@41 -- # kill -0 59885 00:05:42.445 09:37:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.013 09:37:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.013 09:37:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.013 09:37:30 json_config -- json_config/common.sh@41 -- # kill -0 59885 00:05:43.013 09:37:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.013 09:37:30 json_config -- json_config/common.sh@43 -- # break 00:05:43.013 09:37:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.013 SPDK target shutdown done 00:05:43.013 09:37:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.013 INFO: relaunching applications... 00:05:43.013 09:37:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:43.013 09:37:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:43.013 09:37:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:43.013 09:37:30 json_config -- json_config/common.sh@10 -- # shift 00:05:43.013 09:37:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.013 09:37:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.013 09:37:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.013 09:37:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.013 09:37:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.013 09:37:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60178 00:05:43.013 Waiting for target to run... 00:05:43.013 09:37:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:43.013 09:37:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.013 09:37:30 json_config -- json_config/common.sh@25 -- # waitforlisten 60178 /var/tmp/spdk_tgt.sock 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 60178 ']' 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.013 09:37:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.013 [2024-12-05 09:37:30.469147] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
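The shutdown sequence traced above follows the standard pattern in json_config/common.sh: send SIGINT to the target, then probe it with kill -0 every half second for up to 30 iterations before declaring shutdown done. A minimal sketch of that loop, with a hypothetical $pid in place of the harness bookkeeping:

    pid=59885                      # hypothetical; the harness keeps this in app_pid["$app"]
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 only probes, it sends no signal
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done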
00:05:43.013 [2024-12-05 09:37:30.469342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:05:43.579 [2024-12-05 09:37:30.913795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.579 [2024-12-05 09:37:31.001534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.513 [2024-12-05 09:37:31.824808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.513 [2024-12-05 09:37:31.856978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:44.513 09:37:31 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.513 09:37:31 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:44.513 00:05:44.513 09:37:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:44.513 09:37:31 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:44.513 INFO: Checking if target configuration is the same... 00:05:44.513 09:37:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:44.513 09:37:31 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:44.513 09:37:31 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.513 09:37:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.513 + '[' 2 -ne 2 ']' 00:05:44.513 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:44.514 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:44.514 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:44.514 +++ basename /dev/fd/62 00:05:44.514 ++ mktemp /tmp/62.XXX 00:05:44.514 + tmp_file_1=/tmp/62.qSw 00:05:44.514 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.514 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.514 + tmp_file_2=/tmp/spdk_tgt_config.json.Gyc 00:05:44.514 + ret=0 00:05:44.514 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:44.771 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.029 + diff -u /tmp/62.qSw /tmp/spdk_tgt_config.json.Gyc 00:05:45.029 INFO: JSON config files are the same 00:05:45.029 + echo 'INFO: JSON config files are the same' 00:05:45.029 + rm /tmp/62.qSw /tmp/spdk_tgt_config.json.Gyc 00:05:45.029 + exit 0 00:05:45.029 09:37:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:45.029 INFO: changing configuration and checking if this can be detected... 00:05:45.029 09:37:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
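The "configuration is the same" check above is a canonicalize-and-compare: dump the live configuration over RPC, run both the dump and the reference spdk_tgt_config.json through config_filter.py -method sort, and diff the results. A rough sketch of the same idea (the temp-file names are illustrative, not the mktemp values from the trace):

    rootdir=/home/vagrant/spdk_repo/spdk
    ref=$(mktemp /tmp/ref.XXX); live=$(mktemp /tmp/live.XXX)
    # Canonicalize the on-disk reference and the running target's config.
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > "$ref"
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
    diff -u "$ref" "$live" && echo 'INFO: JSON config files are the same'
    rm "$ref" "$live"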
00:05:45.029 09:37:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.029 09:37:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.288 09:37:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:45.288 09:37:32 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:45.288 09:37:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.288 + '[' 2 -ne 2 ']' 00:05:45.288 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:45.288 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:45.289 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:45.289 +++ basename /dev/fd/62 00:05:45.289 ++ mktemp /tmp/62.XXX 00:05:45.289 + tmp_file_1=/tmp/62.PmI 00:05:45.289 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:45.289 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.289 + tmp_file_2=/tmp/spdk_tgt_config.json.jk6 00:05:45.289 + ret=0 00:05:45.289 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.548 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.807 + diff -u /tmp/62.PmI /tmp/spdk_tgt_config.json.jk6 00:05:45.807 + ret=1 00:05:45.807 + echo '=== Start of file: /tmp/62.PmI ===' 00:05:45.808 + cat /tmp/62.PmI 00:05:45.808 + echo '=== End of file: /tmp/62.PmI ===' 00:05:45.808 + echo '' 00:05:45.808 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jk6 ===' 00:05:45.808 + cat /tmp/spdk_tgt_config.json.jk6 00:05:45.808 + echo '=== End of file: /tmp/spdk_tgt_config.json.jk6 ===' 00:05:45.808 + echo '' 00:05:45.808 + rm /tmp/62.PmI /tmp/spdk_tgt_config.json.jk6 00:05:45.808 + exit 1 00:05:45.808 INFO: configuration change detected. 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
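Change detection is the inverse test: mutate the live config (here by deleting the MallocBdevForConfigChangeCheck bdev that was created for exactly this purpose) and require the same sort-and-diff to now exit nonzero. A sketch, reusing $rootdir and $ref from the previous snippet:

    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    # The canonicalized live config must now differ from the reference.
    if ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
            | "$rootdir/test/json_config/config_filter.py" -method sort \
            | diff -u "$ref" - ; then
        echo 'INFO: configuration change detected.'
    fi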
00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 60178 ]] 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 09:37:33 json_config -- json_config/json_config.sh@330 -- # killprocess 60178 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@954 -- # '[' -z 60178 ']' 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@958 -- # kill -0 60178 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@959 -- # uname 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60178 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.808 killing process with pid 60178 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60178' 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@973 -- # kill 60178 00:05:45.808 09:37:33 json_config -- common/autotest_common.sh@978 -- # wait 60178 00:05:46.745 09:37:34 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.745 09:37:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:46.745 09:37:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.745 09:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.745 09:37:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:46.745 INFO: Success 00:05:46.745 09:37:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:46.745 00:05:46.745 real 0m10.875s 00:05:46.745 user 0m14.403s 00:05:46.745 sys 0m2.209s 00:05:46.745 
09:37:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.745 09:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.745 ************************************ 00:05:46.745 END TEST json_config 00:05:46.745 ************************************ 00:05:46.745 09:37:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.745 09:37:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.745 09:37:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.745 09:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:46.745 ************************************ 00:05:46.745 START TEST json_config_extra_key 00:05:46.745 ************************************ 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.745 --rc genhtml_branch_coverage=1 00:05:46.745 --rc genhtml_function_coverage=1 00:05:46.745 --rc genhtml_legend=1 00:05:46.745 --rc geninfo_all_blocks=1 00:05:46.745 --rc geninfo_unexecuted_blocks=1 00:05:46.745 00:05:46.745 ' 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.745 --rc genhtml_branch_coverage=1 00:05:46.745 --rc genhtml_function_coverage=1 00:05:46.745 --rc genhtml_legend=1 00:05:46.745 --rc geninfo_all_blocks=1 00:05:46.745 --rc geninfo_unexecuted_blocks=1 00:05:46.745 00:05:46.745 ' 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.745 --rc genhtml_branch_coverage=1 00:05:46.745 --rc genhtml_function_coverage=1 00:05:46.745 --rc genhtml_legend=1 00:05:46.745 --rc geninfo_all_blocks=1 00:05:46.745 --rc geninfo_unexecuted_blocks=1 00:05:46.745 00:05:46.745 ' 00:05:46.745 09:37:34 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.745 --rc genhtml_branch_coverage=1 00:05:46.745 --rc genhtml_function_coverage=1 00:05:46.745 --rc genhtml_legend=1 00:05:46.745 --rc geninfo_all_blocks=1 00:05:46.745 --rc geninfo_unexecuted_blocks=1 00:05:46.745 00:05:46.745 ' 00:05:46.745 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.745 09:37:34 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.745 09:37:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.745 09:37:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.745 09:37:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.745 09:37:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.746 09:37:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.746 09:37:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.746 09:37:34 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.746 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.746 09:37:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.746 INFO: launching applications... 00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
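json_config/common.sh drives every app through a small set of associative arrays keyed by app name, which is why this extra_key test reuses the same start/shutdown helpers with nothing but a different config path. A condensed sketch of the bookkeeping echoed above:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    # Every helper resolves its socket, PID slot, and config through $app:
    echo "RPC socket for $app: ${app_socket[$app]}"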
00:05:46.746 09:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60374 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.746 Waiting for target to run... 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60374 /var/tmp/spdk_tgt.sock 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 60374 ']' 00:05:46.746 09:37:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.746 09:37:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.005 [2024-12-05 09:37:34.494339] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:47.005 [2024-12-05 09:37:34.494551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60374 ] 00:05:47.576 [2024-12-05 09:37:34.955775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.576 [2024-12-05 09:37:35.044729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.182 09:37:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.182 00:05:48.182 09:37:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:48.182 INFO: shutting down applications... 00:05:48.182 09:37:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
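waitforlisten's body is not shown in this trace, so the following is only an approximation of what it has to do, not the harness code: keep probing a cheap RPC on the target's UNIX-domain socket (rpc_get_methods is one such method) until it answers, bailing out early if the process dies:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
                   -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                               # target is up and serving RPCs
            fi
            sleep 0.1
        done
        return 1
    }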
00:05:48.182 09:37:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60374 ]] 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60374 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:48.182 09:37:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.749 09:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.749 09:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.749 09:37:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:48.749 09:37:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.008 09:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.008 09:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.008 09:37:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:49.008 09:37:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.576 09:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.576 09:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.576 09:37:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:49.576 09:37:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.143 09:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.144 09:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.144 09:37:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:50.144 09:37:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60374 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.709 SPDK target shutdown done 00:05:50.709 09:37:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.709 Success 00:05:50.709 09:37:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.709 00:05:50.709 real 0m3.925s 00:05:50.709 user 0m3.220s 00:05:50.709 sys 0m0.626s 00:05:50.709 09:37:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.709 ************************************ 00:05:50.709 END TEST json_config_extra_key 00:05:50.709 09:37:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.709 ************************************ 00:05:50.709 09:37:38 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.709 09:37:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.709 09:37:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.709 09:37:38 -- common/autotest_common.sh@10 -- # set +x 00:05:50.709 ************************************ 00:05:50.709 START TEST alias_rpc 00:05:50.709 ************************************ 00:05:50.709 09:37:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.709 * Looking for test storage... 00:05:50.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:50.709 09:37:38 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.709 09:37:38 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.709 09:37:38 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.967 09:37:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.967 --rc genhtml_branch_coverage=1 00:05:50.967 --rc genhtml_function_coverage=1 00:05:50.967 --rc genhtml_legend=1 00:05:50.967 --rc geninfo_all_blocks=1 00:05:50.967 --rc geninfo_unexecuted_blocks=1 00:05:50.967 00:05:50.967 ' 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.967 --rc genhtml_branch_coverage=1 00:05:50.967 --rc genhtml_function_coverage=1 00:05:50.967 --rc genhtml_legend=1 00:05:50.967 --rc geninfo_all_blocks=1 00:05:50.967 --rc geninfo_unexecuted_blocks=1 00:05:50.967 00:05:50.967 ' 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.967 --rc genhtml_branch_coverage=1 00:05:50.967 --rc genhtml_function_coverage=1 00:05:50.967 --rc genhtml_legend=1 00:05:50.967 --rc geninfo_all_blocks=1 00:05:50.967 --rc geninfo_unexecuted_blocks=1 00:05:50.967 00:05:50.967 ' 00:05:50.967 09:37:38 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.967 --rc genhtml_branch_coverage=1 00:05:50.967 --rc genhtml_function_coverage=1 00:05:50.967 --rc genhtml_legend=1 00:05:50.967 --rc geninfo_all_blocks=1 00:05:50.968 --rc geninfo_unexecuted_blocks=1 00:05:50.968 00:05:50.968 ' 00:05:50.968 09:37:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.968 09:37:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60491 00:05:50.968 09:37:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60491 00:05:50.968 09:37:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 60491 ']' 00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
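The lt 1.15 2 call near the start of each TEST block is scripts/common.sh deciding whether the installed lcov is older than 2.x (older versions need the explicit --rc branch/function coverage flags exported above). The trace shows the mechanism: split both versions on '.', '-' and ':' with IFS, then compare component by component. A standalone sketch, with non-numeric components naively defaulted to 0:

    version_lt() {                       # returns 0 (true) if version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n1 n2
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            n1=${ver1[v]:-0}; n2=${ver2[v]:-0}
            [[ $n1 =~ ^[0-9]+$ ]] || n1=0    # the real script validates via its decimal() helper
            [[ $n2 =~ ^[0-9]+$ ]] || n2=0
            ((n1 < n2)) && return 0
            ((n1 > n2)) && return 1
        done
        return 1                          # equal is not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov detected"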
00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.968 09:37:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.968 [2024-12-05 09:37:38.465231] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:50.968 [2024-12-05 09:37:38.465427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60491 ] 00:05:51.226 [2024-12-05 09:37:38.632282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.226 [2024-12-05 09:37:38.735752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.162 09:37:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:52.162 09:37:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60491 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 60491 ']' 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 60491 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60491 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.162 killing process with pid 60491 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60491' 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 60491 00:05:52.162 09:37:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 60491 00:05:54.068 00:05:54.068 real 0m3.378s 00:05:54.068 user 0m3.514s 00:05:54.068 sys 0m0.595s 00:05:54.068 09:37:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.068 09:37:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.068 ************************************ 00:05:54.068 END TEST alias_rpc 00:05:54.068 ************************************ 00:05:54.068 09:37:41 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:54.068 09:37:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.068 09:37:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.068 09:37:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.068 09:37:41 -- common/autotest_common.sh@10 -- # set +x 00:05:54.068 ************************************ 00:05:54.068 START TEST dpdk_mem_utility 00:05:54.068 ************************************ 00:05:54.068 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.068 * Looking for test storage... 
00:05:54.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:54.068 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.327 09:37:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.327 --rc genhtml_branch_coverage=1 00:05:54.327 --rc genhtml_function_coverage=1 00:05:54.327 --rc genhtml_legend=1 00:05:54.327 --rc geninfo_all_blocks=1 00:05:54.327 --rc geninfo_unexecuted_blocks=1 00:05:54.327 00:05:54.327 ' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.327 --rc 
genhtml_branch_coverage=1 00:05:54.327 --rc genhtml_function_coverage=1 00:05:54.327 --rc genhtml_legend=1 00:05:54.327 --rc geninfo_all_blocks=1 00:05:54.327 --rc geninfo_unexecuted_blocks=1 00:05:54.327 00:05:54.327 ' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.327 --rc genhtml_branch_coverage=1 00:05:54.327 --rc genhtml_function_coverage=1 00:05:54.327 --rc genhtml_legend=1 00:05:54.327 --rc geninfo_all_blocks=1 00:05:54.327 --rc geninfo_unexecuted_blocks=1 00:05:54.327 00:05:54.327 ' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.327 --rc genhtml_branch_coverage=1 00:05:54.327 --rc genhtml_function_coverage=1 00:05:54.327 --rc genhtml_legend=1 00:05:54.327 --rc geninfo_all_blocks=1 00:05:54.327 --rc geninfo_unexecuted_blocks=1 00:05:54.327 00:05:54.327 ' 00:05:54.327 09:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.327 09:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60603 00:05:54.327 09:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60603 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60603 ']' 00:05:54.327 09:37:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.327 09:37:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.327 [2024-12-05 09:37:41.935247] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
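From here the memory-utility test is short: register a trap so the target is reaped on exit (killprocess, per the traces elsewhere in this log, first checks the process name with ps --no-headers -o comm= before signaling), ask the target to write its DPDK memory statistics, and post-process the dump with dpdk_mem_info.py. The flow, condensed from the trace that follows (rpc.py defaults to /var/tmp/spdk.sock, the socket this target listens on):

    spdkpid=60603                                      # PID from the launch above
    trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    "$rootdir/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
    "$rootdir/scripts/dpdk_mem_info.py" -m 0           # full element listing for heap 0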
00:05:54.327 [2024-12-05 09:37:41.935427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60603 ] 00:05:54.585 [2024-12-05 09:37:42.117660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.844 [2024-12-05 09:37:42.216585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.412 09:37:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.412 09:37:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:55.412 09:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:55.412 09:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:55.412 09:37:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.412 09:37:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.412 { 00:05:55.412 "filename": "/tmp/spdk_mem_dump.txt" 00:05:55.412 } 00:05:55.412 09:37:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.412 09:37:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.412 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:55.412 1 heaps totaling size 824.000000 MiB 00:05:55.412 size: 824.000000 MiB heap id: 0 00:05:55.412 end heaps---------- 00:05:55.412 9 mempools totaling size 603.782043 MiB 00:05:55.412 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:55.412 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:55.412 size: 100.555481 MiB name: bdev_io_60603 00:05:55.412 size: 50.003479 MiB name: msgpool_60603 00:05:55.412 size: 36.509338 MiB name: fsdev_io_60603 00:05:55.412 size: 21.763794 MiB name: PDU_Pool 00:05:55.412 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:55.412 size: 4.133484 MiB name: evtpool_60603 00:05:55.412 size: 0.026123 MiB name: Session_Pool 00:05:55.412 end mempools------- 00:05:55.412 6 memzones totaling size 4.142822 MiB 00:05:55.412 size: 1.000366 MiB name: RG_ring_0_60603 00:05:55.412 size: 1.000366 MiB name: RG_ring_1_60603 00:05:55.412 size: 1.000366 MiB name: RG_ring_4_60603 00:05:55.412 size: 1.000366 MiB name: RG_ring_5_60603 00:05:55.412 size: 0.125366 MiB name: RG_ring_2_60603 00:05:55.412 size: 0.015991 MiB name: RG_ring_3_60603 00:05:55.412 end memzones------- 00:05:55.412 09:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:55.673 heap id: 0 total size: 824.000000 MiB number of busy elements: 236 number of free elements: 18 00:05:55.673 list of free elements. 
size: 16.800659 MiB
00:05:55.673 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:55.673 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:55.673 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:55.673 element at address: 0x200019500040 with size: 0.999939 MiB
00:05:55.673 element at address: 0x200019900040 with size: 0.999939 MiB
00:05:55.673 element at address: 0x200019a00000 with size: 0.999084 MiB
00:05:55.673 element at address: 0x200000400000 with size: 0.994934 MiB
00:05:55.673 element at address: 0x200032600000 with size: 0.994324 MiB
00:05:55.673 element at address: 0x200019200000 with size: 0.959656 MiB
00:05:55.673 element at address: 0x200019d00040 with size: 0.936401 MiB
00:05:55.673 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:55.673 element at address: 0x20001b400000 with size: 0.568542 MiB
00:05:55.673 element at address: 0x200000c00000 with size: 0.495300 MiB
00:05:55.673 element at address: 0x200019600000 with size: 0.489441 MiB
00:05:55.673 element at address: 0x200019e00000 with size: 0.485413 MiB
00:05:55.673 element at address: 0x200012c00000 with size: 0.433716 MiB
00:05:55.673 element at address: 0x200028800000 with size: 0.390930 MiB
00:05:55.673 element at address: 0x200000800000 with size: 0.353088 MiB
00:05:55.673 list of standard malloc elements. size: 199.268433 MiB
00:05:55.673 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:55.673 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:55.673 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:55.673 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:55.673 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:55.673 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:55.673 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:55.673 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:55.673 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:55.673 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:55.673 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further malloc elements of 0.000244 MiB each, spanning addresses 0x2000002d7b00 through 0x20002886fe80, elided]
00:05:55.674 list of memzone associated elements. size: 607.930908 MiB
00:05:55.675 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:05:55.675 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:55.675 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:05:55.675 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:55.675 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:05:55.675 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60603_0
00:05:55.675 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:55.675 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60603_0
00:05:55.675 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:55.675 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60603_0
00:05:55.675 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:05:55.675 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:55.675 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:05:55.675 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:55.675 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:55.675 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60603_0
00:05:55.675 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:55.675 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60603
00:05:55.675 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:55.675 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60603
00:05:55.675 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:05:55.675 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:55.675 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:05:55.675 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:55.675 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:55.675 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:55.675 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:05:55.675 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:55.675 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:55.675 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60603
00:05:55.675 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:55.675 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60603
00:05:55.675 element at address: 0x200019affd40 with size: 1.000549 MiB
00:05:55.675 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60603
00:05:55.675 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:05:55.675 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60603
00:05:55.675 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:55.675 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60603
00:05:55.675 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:55.675 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60603
00:05:55.675 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:05:55.675 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:55.675 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:05:55.675 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:55.675 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:55.675 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:55.675 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:55.675 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60603
00:05:55.675 element at address: 0x20000085e880 with size: 0.125549 MiB
00:05:55.675 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60603
00:05:55.675 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:55.675 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:55.675 element at address: 0x200028864340 with size: 0.023804 MiB
00:05:55.675 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:55.675 element at address: 0x20000085a640 with size: 0.016174 MiB
00:05:55.675 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60603
00:05:55.675 element at address: 0x20002886a4c0 with size: 0.002502 MiB
00:05:55.675 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:55.675 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:55.675 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60603
00:05:55.675 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:55.675 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60603
00:05:55.675 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:55.675 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60603
00:05:55.675 element at address: 0x20002886b000 with size: 0.000366 MiB
00:05:55.675 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:55.675 09:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:55.675 09:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60603
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60603 ']'
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60603
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60603
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:55.676 09:37:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60603
09:37:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60603'
09:37:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60603
09:37:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60603
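The dump above is what dpdk_memory_utility/test_dpdk_mem_info.sh collects from the target before tearing it down. As a rough sketch, the same stats can be pulled from any live SPDK app over its RPC socket; the paths below assume the repo layout used in this job, the default /var/tmp/spdk.sock socket, and an installed jq:

  #!/usr/bin/env bash
  # Sketch: ask a running SPDK target to write its DPDK memory stats and
  # locate the sections mirrored in the log above. SPDK_DIR and the
  # default RPC socket are assumptions for illustration.
  set -euo pipefail
  SPDK_DIR=/home/vagrant/spdk_repo/spdk

  # env_dpdk_get_mem_stats replies with {"filename": "..."} naming the dump file.
  dump=$("$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats | jq -r '.filename')

  # The file carries the same "list of free elements" / "list of standard
  # malloc elements" / "list of memzone associated elements" sections.
  grep -n 'list of' "$dump"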
00:05:57.580 
00:05:57.580 real 0m3.295s
user 0m3.278s
00:05:57.580 sys 0m0.601s
00:05:57.580 09:37:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:57.580 09:37:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:57.580 ************************************
00:05:57.580 END TEST dpdk_mem_utility
00:05:57.580 ************************************
00:05:57.580 09:37:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:57.580 09:37:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:57.580 09:37:44 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:57.580 09:37:44 -- common/autotest_common.sh@10 -- # set +x
00:05:57.580 ************************************
00:05:57.580 START TEST event
00:05:57.580 ************************************
00:05:57.580 09:37:44 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:57.580 * Looking for test storage...
00:05:57.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:57.580 09:37:45 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:57.580 09:37:45 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:57.580 09:37:45 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:57.580 09:37:45 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:57.580 09:37:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:57.580 09:37:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:57.580 09:37:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:57.580 09:37:45 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:57.580 09:37:45 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:57.580 09:37:45 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:57.580 09:37:45 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:57.580 09:37:45 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:57.580 09:37:45 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:57.580 09:37:45 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:57.580 09:37:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:57.580 09:37:45 event -- scripts/common.sh@344 -- # case "$op" in
00:05:57.580 09:37:45 event -- scripts/common.sh@345 -- # : 1
00:05:57.580 09:37:45 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:57.580 09:37:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:57.580 09:37:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:57.580 09:37:45 event -- scripts/common.sh@353 -- # local d=1 00:05:57.580 09:37:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.580 09:37:45 event -- scripts/common.sh@355 -- # echo 1 00:05:57.580 09:37:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.580 09:37:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:57.580 09:37:45 event -- scripts/common.sh@353 -- # local d=2 00:05:57.580 09:37:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.580 09:37:45 event -- scripts/common.sh@355 -- # echo 2 00:05:57.580 09:37:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.580 09:37:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.580 09:37:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.580 09:37:45 event -- scripts/common.sh@368 -- # return 0 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.580 --rc genhtml_branch_coverage=1 00:05:57.580 --rc genhtml_function_coverage=1 00:05:57.580 --rc genhtml_legend=1 00:05:57.580 --rc geninfo_all_blocks=1 00:05:57.580 --rc geninfo_unexecuted_blocks=1 00:05:57.580 00:05:57.580 ' 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.580 --rc genhtml_branch_coverage=1 00:05:57.580 --rc genhtml_function_coverage=1 00:05:57.580 --rc genhtml_legend=1 00:05:57.580 --rc geninfo_all_blocks=1 00:05:57.580 --rc geninfo_unexecuted_blocks=1 00:05:57.580 00:05:57.580 ' 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.580 --rc genhtml_branch_coverage=1 00:05:57.580 --rc genhtml_function_coverage=1 00:05:57.580 --rc genhtml_legend=1 00:05:57.580 --rc geninfo_all_blocks=1 00:05:57.580 --rc geninfo_unexecuted_blocks=1 00:05:57.580 00:05:57.580 ' 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.580 --rc genhtml_branch_coverage=1 00:05:57.580 --rc genhtml_function_coverage=1 00:05:57.580 --rc genhtml_legend=1 00:05:57.580 --rc geninfo_all_blocks=1 00:05:57.580 --rc geninfo_unexecuted_blocks=1 00:05:57.580 00:05:57.580 ' 00:05:57.580 09:37:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.580 09:37:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.580 09:37:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:57.580 09:37:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.580 09:37:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.580 ************************************ 00:05:57.580 START TEST event_perf 00:05:57.580 ************************************ 00:05:57.580 09:37:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.581 Running I/O for 1 seconds...[2024-12-05 
09:37:45.199337] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:57.581 [2024-12-05 09:37:45.199521] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:05:57.839 [2024-12-05 09:37:45.377308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.096 [2024-12-05 09:37:45.481351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.096 [2024-12-05 09:37:45.481519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.096 [2024-12-05 09:37:45.482537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.096 [2024-12-05 09:37:45.482582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.030 Running I/O for 1 seconds... 00:05:59.030 lcore 0: 203417 00:05:59.030 lcore 1: 203415 00:05:59.030 lcore 2: 203415 00:05:59.030 lcore 3: 203415 00:05:59.288 done. 00:05:59.288 00:05:59.288 real 0m1.518s 00:05:59.288 user 0m4.286s 00:05:59.288 sys 0m0.112s 00:05:59.288 09:37:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.288 09:37:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.288 ************************************ 00:05:59.288 END TEST event_perf 00:05:59.288 ************************************ 00:05:59.288 09:37:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.288 09:37:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:59.288 09:37:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.288 09:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.288 ************************************ 00:05:59.288 START TEST event_reactor 00:05:59.288 ************************************ 00:05:59.288 09:37:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.288 [2024-12-05 09:37:46.758737] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
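The event_perf pass above drove four reactors for one second and retired roughly 203k events per lcore. Per the trace, -m is the reactor core mask and -t the runtime in seconds, so a hand rerun with a longer window would look something like this sketch (same build tree assumed, not the harness invocation itself):

  #!/usr/bin/env bash
  # Sketch: rerun the event_perf benchmark by hand.
  # -m 0xF = lcores 0-3 as in the run above; -t = seconds to run.
  set -e
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 5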
00:05:59.288 [2024-12-05 09:37:46.758887] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60763 ] 00:05:59.547 [2024-12-05 09:37:46.923741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.547 [2024-12-05 09:37:47.025397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.925 test_start 00:06:00.925 oneshot 00:06:00.925 tick 100 00:06:00.925 tick 100 00:06:00.925 tick 250 00:06:00.925 tick 100 00:06:00.925 tick 100 00:06:00.925 tick 100 00:06:00.925 tick 250 00:06:00.925 tick 500 00:06:00.925 tick 100 00:06:00.925 tick 100 00:06:00.925 tick 250 00:06:00.925 tick 100 00:06:00.925 tick 100 00:06:00.925 test_end 00:06:00.925 00:06:00.925 real 0m1.496s 00:06:00.925 user 0m1.297s 00:06:00.925 sys 0m0.091s 00:06:00.925 09:37:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.925 09:37:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:00.925 ************************************ 00:06:00.925 END TEST event_reactor 00:06:00.925 ************************************ 00:06:00.925 09:37:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.925 09:37:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:00.925 09:37:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.925 09:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.925 ************************************ 00:06:00.925 START TEST event_reactor_perf 00:06:00.925 ************************************ 00:06:00.925 09:37:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.925 [2024-12-05 09:37:48.310480] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
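Every suite in this log is framed the same way: a START TEST banner, the command under test, a time block (real/user/sys), and an END TEST banner. That framing comes from the run_test helper in common/autotest_common.sh; a rough reconstruction from the output alone (not the actual helper source) is:

  #!/usr/bin/env bash
  # Sketch of the banner/timing wrapper whose output recurs in this log.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the blank line plus real/user/sys seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }

  # Mirrors the event_reactor invocation above:
  run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1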
00:06:00.925 [2024-12-05 09:37:48.310634] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60805 ] 00:06:00.925 [2024-12-05 09:37:48.491167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.184 [2024-12-05 09:37:48.591036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.564 test_start 00:06:02.564 test_end 00:06:02.564 Performance: 357709 events per second 00:06:02.564 00:06:02.564 real 0m1.506s 00:06:02.564 user 0m1.307s 00:06:02.564 sys 0m0.092s 00:06:02.564 09:37:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.564 ************************************ 00:06:02.564 09:37:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.564 END TEST event_reactor_perf 00:06:02.564 ************************************ 00:06:02.564 09:37:49 event -- event/event.sh@49 -- # uname -s 00:06:02.564 09:37:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.564 09:37:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.564 09:37:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.564 09:37:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.564 09:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.564 ************************************ 00:06:02.564 START TEST event_scheduler 00:06:02.564 ************************************ 00:06:02.564 09:37:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.564 * Looking for test storage... 
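Before each suite the harness gates lcov handling on its version with lt 1.15 2, and the scripts/common.sh walk traced for TEST event above repeats just below for the scheduler suite. Condensed, those records step through a field-wise numeric comparison after splitting on ".", "-" and ":"; a sketch reconstructed from the trace (not copied from scripts/common.sh):

  #!/usr/bin/env bash
  # Sketch: succeed when version $1 sorts strictly before version $2,
  # comparing dot/dash/colon-separated numeric fields left to right.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    local v
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly less
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2"   # the exact check traced in this log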
00:06:02.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:02.564 09:37:49 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.564 09:37:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.564 09:37:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.564 09:37:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.564 --rc genhtml_branch_coverage=1 00:06:02.564 --rc genhtml_function_coverage=1 00:06:02.564 --rc genhtml_legend=1 00:06:02.564 --rc geninfo_all_blocks=1 00:06:02.564 --rc geninfo_unexecuted_blocks=1 00:06:02.564 00:06:02.564 ' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.564 --rc genhtml_branch_coverage=1 00:06:02.564 --rc genhtml_function_coverage=1 00:06:02.564 --rc genhtml_legend=1 00:06:02.564 --rc geninfo_all_blocks=1 00:06:02.564 --rc geninfo_unexecuted_blocks=1 00:06:02.564 00:06:02.564 ' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.564 --rc genhtml_branch_coverage=1 00:06:02.564 --rc genhtml_function_coverage=1 00:06:02.564 --rc genhtml_legend=1 00:06:02.564 --rc geninfo_all_blocks=1 00:06:02.564 --rc geninfo_unexecuted_blocks=1 00:06:02.564 00:06:02.564 ' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.564 --rc genhtml_branch_coverage=1 00:06:02.564 --rc genhtml_function_coverage=1 00:06:02.564 --rc genhtml_legend=1 00:06:02.564 --rc geninfo_all_blocks=1 00:06:02.564 --rc geninfo_unexecuted_blocks=1 00:06:02.564 00:06:02.564 ' 00:06:02.564 09:37:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.564 09:37:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60870 00:06:02.564 09:37:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.564 09:37:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.564 09:37:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60870 00:06:02.564 09:37:50 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60870 ']' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.564 09:37:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.564 [2024-12-05 09:37:50.138440] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:02.564 [2024-12-05 09:37:50.138630] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:06:02.822 [2024-12-05 09:37:50.330027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.080 [2024-12-05 09:37:50.481954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.080 [2024-12-05 09:37:50.482080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.080 [2024-12-05 09:37:50.483013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.080 [2024-12-05 09:37:50.483019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:03.685 09:37:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.685 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.685 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.685 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.685 POWER: Cannot set governor of lcore 0 to performance 00:06:03.685 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.685 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.685 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.685 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.685 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:03.685 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:03.685 POWER: Unable to set Power Management Environment for lcore 0 00:06:03.685 [2024-12-05 09:37:51.117118] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:03.685 [2024-12-05 09:37:51.117142] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:03.685 [2024-12-05 09:37:51.117156] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.685 [2024-12-05 09:37:51.117179] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.685 [2024-12-05 09:37:51.117194] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.685 [2024-12-05 09:37:51.117208] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.685 09:37:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.685 09:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.952 [2024-12-05 09:37:51.407327] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:03.952 09:37:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.953 09:37:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.953 09:37:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 ************************************ 00:06:03.953 START TEST scheduler_create_thread 00:06:03.953 ************************************ 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 2 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 3 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 4 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 5 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 6 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 7 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 8 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 9 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 10 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.953 09:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.890 09:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.890 09:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:04.890 09:37:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:04.890 09:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.890 09:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 09:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.268 00:06:06.268 real 0m2.134s 00:06:06.268 user 0m0.017s 00:06:06.268 sys 0m0.006s 00:06:06.268 09:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.268 09:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 ************************************ 00:06:06.268 END TEST scheduler_create_thread 00:06:06.269 ************************************ 00:06:06.269 09:37:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.269 09:37:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60870 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60870 ']' 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60870 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60870 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:06.269 killing process with pid 60870 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60870' 00:06:06.269 09:37:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60870 00:06:06.269 09:37:53 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60870 00:06:06.528 [2024-12-05 09:37:54.034945] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:07.463 00:06:07.463 real 0m5.097s 00:06:07.463 user 0m8.724s 00:06:07.463 sys 0m0.504s 00:06:07.463 09:37:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.463 09:37:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.463 ************************************ 00:06:07.463 END TEST event_scheduler 00:06:07.463 ************************************ 00:06:07.463 09:37:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.463 09:37:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.463 09:37:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.463 09:37:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.463 09:37:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.463 ************************************ 00:06:07.463 START TEST app_repeat 00:06:07.463 ************************************ 00:06:07.463 09:37:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60994 00:06:07.463 09:37:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.463 Process app_repeat pid: 60994 00:06:07.464 09:37:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.464 09:37:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60994' 00:06:07.464 09:37:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.464 spdk_app_start Round 0 00:06:07.464 09:37:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.464 09:37:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60994 /var/tmp/spdk-nbd.sock 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60994 ']' 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.464 09:37:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.464 [2024-12-05 09:37:55.052866] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
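The scheduler suite that just ended boots its app with --wait-for-rpc, selects the dynamic scheduler over RPC, and only then runs framework initialization; the POWER errors above are the expected fallback on hosts where the cpufreq governor is unavailable. A sketch of driving the same sequence against one's own target (the socket path and the final query are assumptions beyond this trace):

  #!/usr/bin/env bash
  # Sketch: switch a target started with --wait-for-rpc to the dynamic
  # scheduler, then let framework initialization proceed, as the
  # scheduler suite above does via rpc_cmd.
  set -euo pipefail
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock   # assumed default RPC socket

  "$RPC" -s "$SOCK" framework_set_scheduler dynamic
  "$RPC" -s "$SOCK" framework_start_init
  # The dynamic scheduler logged its defaults above: load limit 20,
  # core limit 80, core busy 95.
  "$RPC" -s "$SOCK" framework_get_scheduler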
00:06:07.464 [2024-12-05 09:37:55.053087] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60994 ] 00:06:07.722 [2024-12-05 09:37:55.232730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.722 [2024-12-05 09:37:55.341196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.722 [2024-12-05 09:37:55.341197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.657 09:37:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.657 09:37:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:08.657 09:37:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.916 Malloc0 00:06:08.916 09:37:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.175 Malloc1 00:06:09.175 09:37:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.175 09:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.434 /dev/nbd0 00:06:09.434 09:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.434 09:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:09.434 09:37:56 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.434 1+0 records in 00:06:09.434 1+0 records out 00:06:09.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303997 s, 13.5 MB/s 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.434 09:37:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.434 09:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.434 09:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.434 09:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.693 /dev/nbd1 00:06:09.693 09:37:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.693 09:37:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.693 1+0 records in 00:06:09.693 1+0 records out 00:06:09.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368896 s, 11.1 MB/s 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.693 09:37:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.693 09:37:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.693 09:37:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.693 09:37:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.950 09:37:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
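Each nbd export above is validated with a 4096-byte dd read (and, just below, a randomized write/read pass). The rpc.py verbs visible in the trace, bdev_malloc_create and nbd_start_disk, are enough to reproduce the round trip by hand; this sketch assumes a target already listening on /var/tmp/spdk-nbd.sock, and the closing nbd_stop_disk is an assumption beyond what this excerpt shows:

  #!/usr/bin/env bash
  # Sketch: 64 MiB malloc bdev with 4096-byte blocks, exported over NBD,
  # one block written and read back, mirroring the verification above.
  set -euo pipefail
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create 64 4096        # prints the new bdev name (Malloc0 here)
  $RPC nbd_start_disk Malloc0 /dev/nbd0

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=1
  sudo dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=1 oflag=direct
  sudo dd if=/dev/nbd0 of=/tmp/nbdreadback bs=4096 count=1 iflag=direct
  cmp /tmp/nbdrandtest /tmp/nbdreadback && echo 'nbd0 data verified'

  $RPC nbd_stop_disk /dev/nbd0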
00:06:09.950 09:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.950 09:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.950 { 00:06:09.950 "bdev_name": "Malloc0", 00:06:09.950 "nbd_device": "/dev/nbd0" 00:06:09.950 }, 00:06:09.950 { 00:06:09.950 "bdev_name": "Malloc1", 00:06:09.950 "nbd_device": "/dev/nbd1" 00:06:09.950 } 00:06:09.950 ]' 00:06:09.950 09:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.950 { 00:06:09.950 "bdev_name": "Malloc0", 00:06:09.950 "nbd_device": "/dev/nbd0" 00:06:09.951 }, 00:06:09.951 { 00:06:09.951 "bdev_name": "Malloc1", 00:06:09.951 "nbd_device": "/dev/nbd1" 00:06:09.951 } 00:06:09.951 ]' 00:06:09.951 09:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.209 /dev/nbd1' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.209 /dev/nbd1' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.209 256+0 records in 00:06:10.209 256+0 records out 00:06:10.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652427 s, 161 MB/s 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.209 256+0 records in 00:06:10.209 256+0 records out 00:06:10.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271539 s, 38.6 MB/s 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.209 256+0 records in 00:06:10.209 256+0 records out 00:06:10.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331361 s, 31.6 MB/s 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.209 09:37:57 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.209 09:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.467 09:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.725 09:37:58 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.725 09:37:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.983 09:37:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.240 09:37:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.240 09:37:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.240 09:37:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.240 09:37:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.499 09:37:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.433 [2024-12-05 09:37:59.922265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.433 [2024-12-05 09:38:00.013335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.433 [2024-12-05 09:38:00.013381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.691 [2024-12-05 09:38:00.179419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.691 [2024-12-05 09:38:00.179535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.593 spdk_app_start Round 1 00:06:14.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.593 09:38:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.593 09:38:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.593 09:38:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60994 /var/tmp/spdk-nbd.sock 00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60994 ']' 00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
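Round 0 is now complete, and each later round repeats the same sequence against the app on /var/tmp/spdk-nbd.sock. Condensed into one script, the round recorded above amounts to the following (every rpc.py subcommand and dd/cmp invocation is taken from the trace; the RPC shorthand variable and the relative temp-file name are illustrative abbreviations):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096            # -> Malloc0
    $RPC bdev_malloc_create 64 4096            # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$d"          # byte-for-byte verification
    done
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM            # tear the app down for the next round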
00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.594 09:38:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.853 09:38:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.853 09:38:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:14.853 09:38:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.112 Malloc0 00:06:15.112 09:38:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.373 Malloc1 00:06:15.373 09:38:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.373 09:38:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.631 /dev/nbd0 00:06:15.631 09:38:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.631 09:38:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.631 1+0 records in 00:06:15.631 1+0 records out 
00:06:15.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301666 s, 13.6 MB/s 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.631 09:38:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:15.631 09:38:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.631 09:38:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.631 09:38:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.890 /dev/nbd1 00:06:15.890 09:38:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.890 09:38:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.890 1+0 records in 00:06:15.890 1+0 records out 00:06:15.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329039 s, 12.4 MB/s 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.890 09:38:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.148 09:38:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.148 09:38:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.148 09:38:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.148 09:38:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.148 09:38:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.148 09:38:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.148 09:38:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.407 { 00:06:16.407 "bdev_name": "Malloc0", 00:06:16.407 "nbd_device": "/dev/nbd0" 00:06:16.407 }, 00:06:16.407 { 00:06:16.407 "bdev_name": "Malloc1", 00:06:16.407 "nbd_device": "/dev/nbd1" 00:06:16.407 } 
00:06:16.407 ]' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.407 { 00:06:16.407 "bdev_name": "Malloc0", 00:06:16.407 "nbd_device": "/dev/nbd0" 00:06:16.407 }, 00:06:16.407 { 00:06:16.407 "bdev_name": "Malloc1", 00:06:16.407 "nbd_device": "/dev/nbd1" 00:06:16.407 } 00:06:16.407 ]' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.407 /dev/nbd1' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.407 /dev/nbd1' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.407 09:38:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.408 256+0 records in 00:06:16.408 256+0 records out 00:06:16.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677784 s, 155 MB/s 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.408 256+0 records in 00:06:16.408 256+0 records out 00:06:16.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281598 s, 37.2 MB/s 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.408 256+0 records in 00:06:16.408 256+0 records out 00:06:16.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326703 s, 32.1 MB/s 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.408 09:38:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.408 09:38:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.975 09:38:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.234 09:38:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.234 09:38:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.234 09:38:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:17.492 09:38:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.493 09:38:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.493 09:38:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.750 09:38:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.751 [2024-12-05 09:38:06.210404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.751 [2024-12-05 09:38:06.299575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.751 [2024-12-05 09:38:06.299583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.010 [2024-12-05 09:38:06.460785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.010 [2024-12-05 09:38:06.460916] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.915 spdk_app_start Round 2 00:06:20.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.915 09:38:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.915 09:38:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.915 09:38:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60994 /var/tmp/spdk-nbd.sock 00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60994 ']' 00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
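The count check that just ran twice (2 while the devices were attached, 0 after nbd_stop_disk) comes from nbd_get_count, which pipes the nbd_get_disks JSON through jq and counts the /dev/nbd matches. A sketch of that helper as the trace shows it (the `|| true` reflects the bare `true` in the trace above, guarding the case where grep -c finds nothing and exits non-zero):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_get_count() {
        local disks names count
        disks=$($RPC nbd_get_disks)                        # JSON array of {bdev_name, nbd_device}
        names=$(echo "$disks" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)  # empty list -> "0" instead of an error
        echo "$count"
    }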
00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.915 09:38:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.174 09:38:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.174 09:38:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.174 09:38:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.434 Malloc0 00:06:21.434 09:38:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.692 Malloc1 00:06:21.692 09:38:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.692 09:38:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.951 /dev/nbd0 00:06:21.952 09:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.952 09:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.952 1+0 records in 00:06:21.952 1+0 records out 
00:06:21.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267207 s, 15.3 MB/s 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.952 09:38:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.952 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.952 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.952 09:38:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.210 /dev/nbd1 00:06:22.210 09:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.211 1+0 records in 00:06:22.211 1+0 records out 00:06:22.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257339 s, 15.9 MB/s 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.211 09:38:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.211 09:38:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.779 { 00:06:22.779 "bdev_name": "Malloc0", 00:06:22.779 "nbd_device": "/dev/nbd0" 00:06:22.779 }, 00:06:22.779 { 00:06:22.779 "bdev_name": "Malloc1", 00:06:22.779 "nbd_device": "/dev/nbd1" 00:06:22.779 } 
00:06:22.779 ]' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.779 { 00:06:22.779 "bdev_name": "Malloc0", 00:06:22.779 "nbd_device": "/dev/nbd0" 00:06:22.779 }, 00:06:22.779 { 00:06:22.779 "bdev_name": "Malloc1", 00:06:22.779 "nbd_device": "/dev/nbd1" 00:06:22.779 } 00:06:22.779 ]' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.779 /dev/nbd1' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.779 /dev/nbd1' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.779 256+0 records in 00:06:22.779 256+0 records out 00:06:22.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00898094 s, 117 MB/s 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.779 256+0 records in 00:06:22.779 256+0 records out 00:06:22.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246323 s, 42.6 MB/s 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.779 256+0 records in 00:06:22.779 256+0 records out 00:06:22.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367132 s, 28.6 MB/s 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.779 09:38:10 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.779 09:38:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.038 09:38:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.297 09:38:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.864 09:38:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.864 09:38:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.123 09:38:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.059 [2024-12-05 09:38:12.580033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.059 [2024-12-05 09:38:12.669883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.059 [2024-12-05 09:38:12.669894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.318 [2024-12-05 09:38:12.834843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.318 [2024-12-05 09:38:12.834987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.222 09:38:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60994 /var/tmp/spdk-nbd.sock 00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60994 ']' 00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
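Three data-verification rounds plus a final round that only restarts and kills the app: that is the whole shape of app_repeat. The driver loop, reconstructed from the event.sh line numbers in the trace (@23-@25 per round, @34 kill, @35 sleep, @38-@39 for the last round; waitforlisten, killprocess and nbd_rpc_data_verify are the autotest helpers traced in this log, so the sketch assumes the helper library is sourced):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do                                 # event.sh@23
        echo "spdk_app_start Round $i"                  # event.sh@24
        waitforlisten 60994 /var/tmp/spdk-nbd.sock      # event.sh@25
        $rpc bdev_malloc_create 64 4096                 # event.sh@27 -> Malloc0
        $rpc bdev_malloc_create 64 4096                 # event.sh@28 -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                 # event.sh@34
        sleep 3                                         # event.sh@35: app_repeat restarts the app
    done
    waitforlisten 60994 /var/tmp/spdk-nbd.sock          # event.sh@38: final restart, Round 3
    killprocess 60994                                   # event.sh@39: SIGTERM and reap

The sleep 3 between rounds gives the app_repeat binary time to notice the SIGTERM and bring the app back up before the next waitforlisten starts polling.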
00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.222 09:38:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.481 09:38:15 event.app_repeat -- event/event.sh@39 -- # killprocess 60994 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60994 ']' 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60994 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60994 00:06:27.481 killing process with pid 60994 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60994' 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60994 00:06:27.481 09:38:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60994 00:06:28.415 spdk_app_start is called in Round 0. 00:06:28.415 Shutdown signal received, stop current app iteration 00:06:28.415 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:28.415 spdk_app_start is called in Round 1. 00:06:28.415 Shutdown signal received, stop current app iteration 00:06:28.415 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:28.415 spdk_app_start is called in Round 2. 00:06:28.415 Shutdown signal received, stop current app iteration 00:06:28.415 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:06:28.415 spdk_app_start is called in Round 3. 00:06:28.415 Shutdown signal received, stop current app iteration 00:06:28.415 09:38:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:28.415 09:38:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:28.415 00:06:28.415 real 0m20.861s 00:06:28.415 user 0m46.151s 00:06:28.415 sys 0m3.195s 00:06:28.415 ************************************ 00:06:28.415 END TEST app_repeat 00:06:28.415 ************************************ 00:06:28.415 09:38:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.415 09:38:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.415 09:38:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:28.415 09:38:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.415 09:38:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.415 09:38:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.415 09:38:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.415 ************************************ 00:06:28.415 START TEST cpu_locks 00:06:28.415 ************************************ 00:06:28.415 09:38:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.415 * Looking for test storage... 
00:06:28.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.415 09:38:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.415 09:38:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.415 09:38:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.674 09:38:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.674 09:38:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.675 09:38:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.675 --rc genhtml_branch_coverage=1 00:06:28.675 --rc genhtml_function_coverage=1 00:06:28.675 --rc genhtml_legend=1 00:06:28.675 --rc geninfo_all_blocks=1 00:06:28.675 --rc geninfo_unexecuted_blocks=1 00:06:28.675 00:06:28.675 ' 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.675 --rc genhtml_branch_coverage=1 00:06:28.675 --rc genhtml_function_coverage=1 
00:06:28.675 --rc genhtml_legend=1 00:06:28.675 --rc geninfo_all_blocks=1 00:06:28.675 --rc geninfo_unexecuted_blocks=1 00:06:28.675 00:06:28.675 ' 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.675 --rc genhtml_branch_coverage=1 00:06:28.675 --rc genhtml_function_coverage=1 00:06:28.675 --rc genhtml_legend=1 00:06:28.675 --rc geninfo_all_blocks=1 00:06:28.675 --rc geninfo_unexecuted_blocks=1 00:06:28.675 00:06:28.675 ' 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.675 --rc genhtml_branch_coverage=1 00:06:28.675 --rc genhtml_function_coverage=1 00:06:28.675 --rc genhtml_legend=1 00:06:28.675 --rc geninfo_all_blocks=1 00:06:28.675 --rc geninfo_unexecuted_blocks=1 00:06:28.675 00:06:28.675 ' 00:06:28.675 09:38:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:28.675 09:38:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:28.675 09:38:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:28.675 09:38:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.675 09:38:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.675 ************************************ 00:06:28.675 START TEST default_locks 00:06:28.675 ************************************ 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61650 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61650 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61650 ']' 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.675 09:38:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.675 [2024-12-05 09:38:16.240975] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:28.675 [2024-12-05 09:38:16.242036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61650 ] 00:06:28.934 [2024-12-05 09:38:16.426895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.934 [2024-12-05 09:38:16.525507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61650 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61650 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61650 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61650 ']' 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61650 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.886 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61650 00:06:30.145 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.145 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.145 killing process with pid 61650 00:06:30.145 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61650' 00:06:30.145 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61650 00:06:30.145 09:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61650 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61650 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61650 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61650 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61650 ']' 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.050 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.050 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61650) - No such process 00:06:32.050 ERROR: process (pid: 61650) is no longer running 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.050 00:06:32.050 real 0m3.249s 00:06:32.050 user 0m3.194s 00:06:32.050 sys 0m0.657s 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.050 ************************************ 00:06:32.050 END TEST default_locks 00:06:32.050 ************************************ 00:06:32.050 09:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.050 09:38:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.050 09:38:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.050 09:38:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.050 09:38:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.050 ************************************ 00:06:32.050 START TEST default_locks_via_rpc 00:06:32.050 ************************************ 00:06:32.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
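The block above is the NOT waitforlisten expansion from autotest_common.sh: the wrapped call fails against the already-killed target (es=1), the es > 128 signal check is consulted, and (( !es == 0 )) turns that failure into the success the test expects. A minimal sketch of the inversion helper as it reads back out of the xtrace (the real function also checks an expected-output variable, elided here, and its signal handling may differ):

  NOT() {
    local es=0
    "$@" || es=$?                # the trace records es=1 after the expected failure
    (( es > 128 )) && return 1   # signal deaths handled separately (assumption)
    (( !es == 0 ))               # the exact final test shown in the trace
  }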
00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61730 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61730 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61730 ']' 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.050 09:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.050 [2024-12-05 09:38:19.541252] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:32.050 [2024-12-05 09:38:19.541467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61730 ] 00:06:32.309 [2024-12-05 09:38:19.722799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.309 [2024-12-05 09:38:19.820799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 
61730 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61730 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61730 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61730 ']' 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61730 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.253 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61730 00:06:33.528 killing process with pid 61730 00:06:33.528 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.528 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.528 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61730' 00:06:33.528 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61730 00:06:33.528 09:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61730 00:06:35.430 00:06:35.430 real 0m3.297s 00:06:35.430 user 0m3.228s 00:06:35.430 sys 0m0.675s 00:06:35.430 09:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.430 ************************************ 00:06:35.431 09:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.431 END TEST default_locks_via_rpc 00:06:35.431 ************************************ 00:06:35.431 09:38:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.431 09:38:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.431 09:38:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.431 09:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.431 ************************************ 00:06:35.431 START TEST non_locking_app_on_locked_coremask 00:06:35.431 ************************************ 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61812 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61812 /var/tmp/spdk.sock 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61812 ']' 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:35.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.431 09:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.431 [2024-12-05 09:38:22.898550] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:35.431 [2024-12-05 09:38:22.898759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61812 ] 00:06:35.690 [2024-12-05 09:38:23.083014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.690 [2024-12-05 09:38:23.181110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61840 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61840 /var/tmp/spdk2.sock 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61840 ']' 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.646 09:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.646 [2024-12-05 09:38:24.049808] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:36.646 [2024-12-05 09:38:24.050015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61840 ] 00:06:36.646 [2024-12-05 09:38:24.239642] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
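event/cpu_locks.sh line 22 keeps recurring in this trace as a two-command probe: lslocks -p <pid> piped into grep -q spdk_cpu_lock. Read back out of the xtrace, the locks_exist helper is presumably just the following (the body is an inference from the trace, not a quote of the repo):

  locks_exist() {
    # lslocks lists the file locks a process holds; a target that claimed its
    # cores holds one per core, on files named /var/tmp/spdk_cpu_lock_NNN
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }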
00:06:36.646 [2024-12-05 09:38:24.239692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.904 [2024-12-05 09:38:24.443762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.281 09:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.281 09:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.281 09:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61812 00:06:38.281 09:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61812 00:06:38.281 09:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61812 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61812 ']' 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61812 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61812 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.264 killing process with pid 61812 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61812' 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61812 00:06:39.264 09:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61812 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61840 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61840 ']' 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61840 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61840 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.454 killing process with pid 61840 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61840' 00:06:43.454 09:38:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61840 00:06:43.454 09:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61840 00:06:44.846 00:06:44.846 real 0m9.402s 00:06:44.846 user 0m9.704s 00:06:44.846 sys 0m1.469s 00:06:44.846 ************************************ 00:06:44.846 END TEST non_locking_app_on_locked_coremask 00:06:44.846 ************************************ 00:06:44.846 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.846 09:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.846 09:38:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:44.846 09:38:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.846 09:38:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.846 09:38:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.846 ************************************ 00:06:44.846 START TEST locking_app_on_unlocked_coremask 00:06:44.846 ************************************ 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61982 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61982 /var/tmp/spdk.sock 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61982 ']' 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.846 09:38:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.846 [2024-12-05 09:38:32.311341] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:44.846 [2024-12-05 09:38:32.311526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61982 ] 00:06:45.106 [2024-12-05 09:38:32.476776] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.106 [2024-12-05 09:38:32.476837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.106 [2024-12-05 09:38:32.578378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62010 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62010 /var/tmp/spdk2.sock 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62010 ']' 00:06:46.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.044 09:38:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.044 [2024-12-05 09:38:33.450246] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
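The scenario in flight here, condensed from the two launch lines above (masks, flags, and socket names are taken from the trace; the backgrounding and waitforlisten plumbing is simplified):

  # first target opts out of core locking, so core 0 stays unclaimed
  spdk_tgt -m 0x1 --disable-cpumask-locks &
  waitforlisten $! /var/tmp/spdk.sock
  # a second target on the same core can then start and take the lock itself
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  waitforlisten $! /var/tmp/spdk2.sock

The rest of the test then runs the lslocks probe against the second pid and tears both targets down.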
00:06:46.044 [2024-12-05 09:38:33.450453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62010 ] 00:06:46.044 [2024-12-05 09:38:33.639236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.303 [2024-12-05 09:38:33.836918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.689 09:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.689 09:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.689 09:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62010 00:06:47.689 09:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62010 00:06:47.689 09:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61982 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61982 ']' 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61982 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61982 00:06:48.626 killing process with pid 61982 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61982' 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61982 00:06:48.626 09:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61982 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62010 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62010 ']' 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 62010 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62010 00:06:52.817 killing process with pid 62010 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.817 09:38:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62010' 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 62010 00:06:52.817 09:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 62010 00:06:54.253 ************************************ 00:06:54.253 END TEST locking_app_on_unlocked_coremask 00:06:54.253 ************************************ 00:06:54.253 00:06:54.253 real 0m9.377s 00:06:54.253 user 0m9.759s 00:06:54.253 sys 0m1.395s 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.253 09:38:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:54.253 09:38:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.253 09:38:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.253 09:38:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.253 ************************************ 00:06:54.253 START TEST locking_app_on_locked_coremask 00:06:54.253 ************************************ 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62148 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62148 /var/tmp/spdk.sock 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62148 ']' 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.253 09:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.253 [2024-12-05 09:38:41.743453] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:54.254 [2024-12-05 09:38:41.743634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:06:54.512 [2024-12-05 09:38:41.907720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.512 [2024-12-05 09:38:42.014386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62177 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62177 /var/tmp/spdk2.sock 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62177 /var/tmp/spdk2.sock 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 62177 /var/tmp/spdk2.sock 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62177 ']' 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.449 09:38:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.449 [2024-12-05 09:38:42.855055] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:55.449 [2024-12-05 09:38:42.855215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62177 ] 00:06:55.449 [2024-12-05 09:38:43.030010] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62148 has claimed it. 00:06:55.449 [2024-12-05 09:38:43.034029] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62177) - No such process 00:06:56.017 ERROR: process (pid: 62177) is no longer running 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62148 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62148 00:06:56.017 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62148 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62148 ']' 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62148 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62148 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.585 killing process with pid 62148 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62148' 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62148 00:06:56.585 09:38:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62148 00:06:58.485 00:06:58.485 real 0m4.130s 00:06:58.485 user 0m4.418s 00:06:58.485 sys 0m0.869s 00:06:58.485 09:38:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.485 09:38:45 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.485 ************************************ 00:06:58.485 END TEST locking_app_on_locked_coremask 00:06:58.485 ************************************ 00:06:58.485 09:38:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:58.485 09:38:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.485 09:38:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.485 09:38:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.485 ************************************ 00:06:58.485 START TEST locking_overlapped_coremask 00:06:58.485 ************************************ 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62249 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62249 /var/tmp/spdk.sock 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 62249 ']' 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.485 09:38:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.485 [2024-12-05 09:38:45.957827] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:58.485 [2024-12-05 09:38:45.958043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62249 ] 00:06:58.742 [2024-12-05 09:38:46.138652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.742 [2024-12-05 09:38:46.243441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.742 [2024-12-05 09:38:46.243560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.742 [2024-12-05 09:38:46.243591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62279 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62279 /var/tmp/spdk2.sock 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62279 /var/tmp/spdk2.sock 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 62279 /var/tmp/spdk2.sock 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 62279 ']' 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.678 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.678 [2024-12-05 09:38:47.169357] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
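The two masks above are 0x7 (cores 0-2) for the running target and 0x1c (cores 2-4) for the one now starting; their intersection is the single contested core that the error just below complains about. A one-line check of the arithmetic:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2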
00:06:59.678 [2024-12-05 09:38:47.169558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62279 ] 00:06:59.937 [2024-12-05 09:38:47.365706] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62249 has claimed it. 00:06:59.937 [2024-12-05 09:38:47.365807] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.504 ERROR: process (pid: 62279) is no longer running 00:07:00.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62279) - No such process 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62249 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 62249 ']' 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 62249 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62249 00:07:00.504 killing process with pid 62249 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62249' 00:07:00.504 09:38:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 62249 00:07:00.504 09:38:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 62249 00:07:02.406 00:07:02.406 real 0m3.992s 00:07:02.406 user 0m10.839s 00:07:02.406 sys 0m0.725s 00:07:02.406 ************************************ 00:07:02.406 END TEST locking_overlapped_coremask 00:07:02.406 ************************************ 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 09:38:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.406 09:38:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.406 09:38:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.406 09:38:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 ************************************ 00:07:02.406 START TEST locking_overlapped_coremask_via_rpc 00:07:02.406 ************************************ 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:02.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62349 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62349 /var/tmp/spdk.sock 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62349 ']' 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.406 09:38:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 [2024-12-05 09:38:50.008315] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:02.406 [2024-12-05 09:38:50.008476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62349 ] 00:07:02.665 [2024-12-05 09:38:50.194599] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
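check_remaining_locks, expanded at cpu_locks.sh lines 36-38 near the end of the previous test, compares the lock files left on disk against the set the 0x7 mask should have produced. Reassembled into readable form (variable names and paths are verbatim from the xtrace; only the layout and comments are added):

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                   # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # cores 0-2 from -m 0x7
    # the heavily escaped pattern in the trace is just this literal comparison
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }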
00:07:02.665 [2024-12-05 09:38:50.194672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.924 [2024-12-05 09:38:50.349049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.924 [2024-12-05 09:38:50.349162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.924 [2024-12-05 09:38:50.349171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.859 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.859 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62379 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62379 /var/tmp/spdk2.sock 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62379 ']' 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.860 09:38:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 [2024-12-05 09:38:51.443036] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:03.860 [2024-12-05 09:38:51.443659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62379 ] 00:07:04.118 [2024-12-05 09:38:51.648151] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:04.118 [2024-12-05 09:38:51.652128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.377 [2024-12-05 09:38:51.918224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.377 [2024-12-05 09:38:51.918309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.377 [2024-12-05 09:38:51.918323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.282 [2024-12-05 09:38:53.562235] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62349 has claimed it. 
00:07:06.282 2024/12/05 09:38:53 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:06.282 request: 00:07:06.282 { 00:07:06.282 "method": "framework_enable_cpumask_locks", 00:07:06.282 "params": {} 00:07:06.282 } 00:07:06.282 Got JSON-RPC error response 00:07:06.282 GoRPCClient: error on JSON-RPC call 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62349 /var/tmp/spdk.sock 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62349 ']' 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62379 /var/tmp/spdk2.sock 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62379 ']' 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
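rpc_cmd above is the harness wrapper around SPDK's JSON-RPC client; the same call could presumably be issued directly with scripts/rpc.py against the second target's socket, and would reproduce the Code=-32603 "Failed to claim CPU core: 2" response for as long as pid 62349 still holds that core:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks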
00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.282 09:38:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.850 ************************************ 00:07:06.850 END TEST locking_overlapped_coremask_via_rpc 00:07:06.850 ************************************ 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.850 00:07:06.850 real 0m4.344s 00:07:06.850 user 0m1.638s 00:07:06.850 sys 0m0.267s 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.850 09:38:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.850 09:38:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.850 09:38:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62349 ]] 00:07:06.850 09:38:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62349 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62349 ']' 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62349 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62349 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62349' 00:07:06.850 killing process with pid 62349 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62349 00:07:06.850 09:38:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62349 00:07:09.402 09:38:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62379 ]] 00:07:09.402 09:38:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62379 00:07:09.402 09:38:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62379 ']' 00:07:09.402 09:38:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62379 00:07:09.402 09:38:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:09.402 09:38:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.402 
09:38:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62379 00:07:09.402 killing process with pid 62379 00:07:09.403 09:38:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:09.403 09:38:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:09.403 09:38:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62379' 00:07:09.403 09:38:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62379 00:07:09.403 09:38:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62379 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62349 ]] 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62349 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62349 ']' 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62349 00:07:11.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62349) - No such process 00:07:11.302 Process with pid 62349 is not found 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62349 is not found' 00:07:11.302 Process with pid 62379 is not found 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62379 ]] 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62379 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62379 ']' 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62379 00:07:11.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62379) - No such process 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62379 is not found' 00:07:11.302 09:38:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.302 00:07:11.302 real 0m42.872s 00:07:11.302 user 1m16.140s 00:07:11.302 sys 0m7.379s 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.302 09:38:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 ************************************ 00:07:11.302 END TEST cpu_locks 00:07:11.302 ************************************ 00:07:11.302 00:07:11.302 real 1m13.861s 00:07:11.302 user 2m18.112s 00:07:11.302 sys 0m11.649s 00:07:11.302 09:38:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.302 ************************************ 00:07:11.302 END TEST event 00:07:11.302 ************************************ 00:07:11.302 09:38:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 09:38:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.302 09:38:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.302 09:38:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.302 09:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 ************************************ 00:07:11.302 START TEST thread 00:07:11.302 ************************************ 00:07:11.302 09:38:58 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.560 * Looking for test storage... 
00:07:11.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:11.560 09:38:58 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.560 09:38:58 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.560 09:38:58 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.560 09:38:59 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.560 09:38:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.560 09:38:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.560 09:38:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.560 09:38:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.560 09:38:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.560 09:38:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.560 09:38:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.560 09:38:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.560 09:38:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.560 09:38:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.560 09:38:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.560 09:38:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:11.560 09:38:59 thread -- scripts/common.sh@345 -- # : 1 00:07:11.560 09:38:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.560 09:38:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.560 09:38:59 thread -- scripts/common.sh@365 -- # decimal 1 00:07:11.560 09:38:59 thread -- scripts/common.sh@353 -- # local d=1 00:07:11.560 09:38:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.560 09:38:59 thread -- scripts/common.sh@355 -- # echo 1 00:07:11.560 09:38:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.560 09:38:59 thread -- scripts/common.sh@366 -- # decimal 2 00:07:11.560 09:38:59 thread -- scripts/common.sh@353 -- # local d=2 00:07:11.560 09:38:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.560 09:38:59 thread -- scripts/common.sh@355 -- # echo 2 00:07:11.560 09:38:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.560 09:38:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.560 09:38:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.560 09:38:59 thread -- scripts/common.sh@368 -- # return 0 00:07:11.560 09:38:59 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.560 09:38:59 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.560 --rc genhtml_branch_coverage=1 00:07:11.560 --rc genhtml_function_coverage=1 00:07:11.560 --rc genhtml_legend=1 00:07:11.560 --rc geninfo_all_blocks=1 00:07:11.560 --rc geninfo_unexecuted_blocks=1 00:07:11.560 00:07:11.560 ' 00:07:11.560 09:38:59 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.560 --rc genhtml_branch_coverage=1 00:07:11.560 --rc genhtml_function_coverage=1 00:07:11.560 --rc genhtml_legend=1 00:07:11.560 --rc geninfo_all_blocks=1 00:07:11.560 --rc geninfo_unexecuted_blocks=1 00:07:11.560 00:07:11.560 ' 00:07:11.560 09:38:59 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:11.560 --rc genhtml_branch_coverage=1 00:07:11.560 --rc genhtml_function_coverage=1 00:07:11.560 --rc genhtml_legend=1 00:07:11.560 --rc geninfo_all_blocks=1 00:07:11.561 --rc geninfo_unexecuted_blocks=1 00:07:11.561 00:07:11.561 ' 00:07:11.561 09:38:59 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.561 --rc genhtml_branch_coverage=1 00:07:11.561 --rc genhtml_function_coverage=1 00:07:11.561 --rc genhtml_legend=1 00:07:11.561 --rc geninfo_all_blocks=1 00:07:11.561 --rc geninfo_unexecuted_blocks=1 00:07:11.561 00:07:11.561 ' 00:07:11.561 09:38:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.561 09:38:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:11.561 09:38:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.561 09:38:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.561 ************************************ 00:07:11.561 START TEST thread_poller_perf 00:07:11.561 ************************************ 00:07:11.561 09:38:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.561 [2024-12-05 09:38:59.096005] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:11.561 [2024-12-05 09:38:59.096413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62593 ] 00:07:11.818 [2024-12-05 09:38:59.288377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.819 [2024-12-05 09:38:59.442421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:13.196 [2024-12-05T09:39:00.825Z] ====================================== 00:07:13.196 [2024-12-05T09:39:00.825Z] busy:2214206381 (cyc) 00:07:13.196 [2024-12-05T09:39:00.825Z] total_run_count: 355000 00:07:13.196 [2024-12-05T09:39:00.825Z] tsc_hz: 2200000000 (cyc) 00:07:13.196 [2024-12-05T09:39:00.825Z] ====================================== 00:07:13.196 [2024-12-05T09:39:00.825Z] poller_cost: 6237 (cyc), 2835 (nsec) 00:07:13.196 00:07:13.196 real 0m1.585s 00:07:13.196 user 0m1.369s 00:07:13.196 sys 0m0.107s 00:07:13.196 09:39:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.196 09:39:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.196 ************************************ 00:07:13.196 END TEST thread_poller_perf 00:07:13.196 ************************************ 00:07:13.196 09:39:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.196 09:39:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.196 09:39:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.196 09:39:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.196 ************************************ 00:07:13.196 START TEST thread_poller_perf 00:07:13.196 ************************************ 00:07:13.196 09:39:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.196 [2024-12-05 09:39:00.731646] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:13.196 [2024-12-05 09:39:00.731801] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62629 ] 00:07:13.453 [2024-12-05 09:39:00.910680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.453 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:13.453 [2024-12-05 09:39:01.022261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.825 [2024-12-05T09:39:02.454Z] ====================================== 00:07:14.825 [2024-12-05T09:39:02.454Z] busy:2203875004 (cyc) 00:07:14.825 [2024-12-05T09:39:02.454Z] total_run_count: 3934000 00:07:14.825 [2024-12-05T09:39:02.454Z] tsc_hz: 2200000000 (cyc) 00:07:14.825 [2024-12-05T09:39:02.454Z] ====================================== 00:07:14.825 [2024-12-05T09:39:02.454Z] poller_cost: 560 (cyc), 254 (nsec) 00:07:14.825 ************************************ 00:07:14.825 END TEST thread_poller_perf 00:07:14.825 ************************************ 00:07:14.825 00:07:14.825 real 0m1.528s 00:07:14.825 user 0m1.321s 00:07:14.825 sys 0m0.099s 00:07:14.825 09:39:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.825 09:39:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.825 09:39:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:14.825 00:07:14.825 real 0m3.396s 00:07:14.825 user 0m2.841s 00:07:14.825 sys 0m0.335s 00:07:14.825 09:39:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.825 ************************************ 00:07:14.825 END TEST thread 00:07:14.825 ************************************ 00:07:14.825 09:39:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.825 09:39:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:14.825 09:39:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.825 09:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.825 09:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.825 09:39:02 -- common/autotest_common.sh@10 -- # set +x 00:07:14.825 ************************************ 00:07:14.825 START TEST app_cmdline 00:07:14.825 ************************************ 00:07:14.825 09:39:02 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:14.825 * Looking for test storage... 
00:07:14.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:14.825 09:39:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.825 09:39:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.825 09:39:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.825 09:39:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.825 09:39:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.084 09:39:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.084 --rc genhtml_branch_coverage=1 00:07:15.084 --rc genhtml_function_coverage=1 00:07:15.084 --rc genhtml_legend=1 00:07:15.084 --rc geninfo_all_blocks=1 00:07:15.084 --rc geninfo_unexecuted_blocks=1 00:07:15.084 00:07:15.084 ' 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.084 --rc genhtml_branch_coverage=1 00:07:15.084 --rc genhtml_function_coverage=1 00:07:15.084 --rc genhtml_legend=1 00:07:15.084 --rc geninfo_all_blocks=1 00:07:15.084 --rc geninfo_unexecuted_blocks=1 00:07:15.084 
00:07:15.084 ' 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.084 --rc genhtml_branch_coverage=1 00:07:15.084 --rc genhtml_function_coverage=1 00:07:15.084 --rc genhtml_legend=1 00:07:15.084 --rc geninfo_all_blocks=1 00:07:15.084 --rc geninfo_unexecuted_blocks=1 00:07:15.084 00:07:15.084 ' 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.084 --rc genhtml_branch_coverage=1 00:07:15.084 --rc genhtml_function_coverage=1 00:07:15.084 --rc genhtml_legend=1 00:07:15.084 --rc geninfo_all_blocks=1 00:07:15.084 --rc geninfo_unexecuted_blocks=1 00:07:15.084 00:07:15.084 ' 00:07:15.084 09:39:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.084 09:39:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62718 00:07:15.084 09:39:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.084 09:39:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62718 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62718 ']' 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.084 09:39:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.084 [2024-12-05 09:39:02.574581] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:15.084 [2024-12-05 09:39:02.575053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62718 ] 00:07:15.343 [2024-12-05 09:39:02.743761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.343 [2024-12-05 09:39:02.851539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.278 09:39:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.278 09:39:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:16.278 { 00:07:16.278 "fields": { 00:07:16.278 "commit": "8d3947977", 00:07:16.278 "major": 25, 00:07:16.278 "minor": 1, 00:07:16.278 "patch": 0, 00:07:16.278 "suffix": "-pre" 00:07:16.278 }, 00:07:16.278 "version": "SPDK v25.01-pre git sha1 8d3947977" 00:07:16.278 } 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.278 09:39:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.278 09:39:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.278 09:39:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.537 09:39:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.537 09:39:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.537 09:39:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:16.537 09:39:03 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.797 2024/12/05 09:39:04 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:16.797 request: 00:07:16.797 { 00:07:16.797 "method": "env_dpdk_get_mem_stats", 00:07:16.797 "params": {} 00:07:16.797 } 00:07:16.797 Got JSON-RPC error response 00:07:16.797 GoRPCClient: error on JSON-RPC call 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.797 09:39:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62718 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62718 ']' 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62718 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62718 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.797 killing process with pid 62718 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62718' 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 62718 00:07:16.797 09:39:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 62718 00:07:18.699 00:07:18.699 real 0m3.800s 00:07:18.699 user 0m4.186s 00:07:18.699 sys 0m0.643s 00:07:18.699 09:39:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.699 09:39:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.699 ************************************ 00:07:18.699 END TEST app_cmdline 00:07:18.699 ************************************ 00:07:18.699 09:39:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.699 09:39:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.699 09:39:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.699 09:39:06 -- common/autotest_common.sh@10 -- # set +x 00:07:18.699 ************************************ 00:07:18.699 START TEST version 00:07:18.699 ************************************ 00:07:18.699 09:39:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:18.699 * Looking for test storage... 
00:07:18.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:18.699 09:39:06 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.699 09:39:06 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.699 09:39:06 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.699 09:39:06 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.699 09:39:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.699 09:39:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.699 09:39:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.699 09:39:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.699 09:39:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.699 09:39:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.699 09:39:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.699 09:39:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.699 09:39:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.699 09:39:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.699 09:39:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.699 09:39:06 version -- scripts/common.sh@344 -- # case "$op" in 00:07:18.699 09:39:06 version -- scripts/common.sh@345 -- # : 1 00:07:18.699 09:39:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.699 09:39:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.699 09:39:06 version -- scripts/common.sh@365 -- # decimal 1 00:07:18.699 09:39:06 version -- scripts/common.sh@353 -- # local d=1 00:07:18.699 09:39:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.699 09:39:06 version -- scripts/common.sh@355 -- # echo 1 00:07:18.958 09:39:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.958 09:39:06 version -- scripts/common.sh@366 -- # decimal 2 00:07:18.958 09:39:06 version -- scripts/common.sh@353 -- # local d=2 00:07:18.958 09:39:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.958 09:39:06 version -- scripts/common.sh@355 -- # echo 2 00:07:18.958 09:39:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.958 09:39:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.958 09:39:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.958 09:39:06 version -- scripts/common.sh@368 -- # return 0 00:07:18.958 09:39:06 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.958 09:39:06 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.958 --rc genhtml_branch_coverage=1 00:07:18.958 --rc genhtml_function_coverage=1 00:07:18.958 --rc genhtml_legend=1 00:07:18.958 --rc geninfo_all_blocks=1 00:07:18.958 --rc geninfo_unexecuted_blocks=1 00:07:18.958 00:07:18.958 ' 00:07:18.958 09:39:06 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.958 --rc genhtml_branch_coverage=1 00:07:18.958 --rc genhtml_function_coverage=1 00:07:18.958 --rc genhtml_legend=1 00:07:18.958 --rc geninfo_all_blocks=1 00:07:18.958 --rc geninfo_unexecuted_blocks=1 00:07:18.958 00:07:18.958 ' 00:07:18.958 09:39:06 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.958 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:18.958 --rc genhtml_branch_coverage=1 00:07:18.958 --rc genhtml_function_coverage=1 00:07:18.958 --rc genhtml_legend=1 00:07:18.958 --rc geninfo_all_blocks=1 00:07:18.958 --rc geninfo_unexecuted_blocks=1 00:07:18.958 00:07:18.958 ' 00:07:18.958 09:39:06 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.958 --rc genhtml_branch_coverage=1 00:07:18.958 --rc genhtml_function_coverage=1 00:07:18.958 --rc genhtml_legend=1 00:07:18.958 --rc geninfo_all_blocks=1 00:07:18.958 --rc geninfo_unexecuted_blocks=1 00:07:18.958 00:07:18.958 ' 00:07:18.958 09:39:06 version -- app/version.sh@17 -- # get_header_version major 00:07:18.958 09:39:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # cut -f2 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.958 09:39:06 version -- app/version.sh@17 -- # major=25 00:07:18.958 09:39:06 version -- app/version.sh@18 -- # get_header_version minor 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # cut -f2 00:07:18.958 09:39:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.958 09:39:06 version -- app/version.sh@18 -- # minor=1 00:07:18.958 09:39:06 version -- app/version.sh@19 -- # get_header_version patch 00:07:18.958 09:39:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # cut -f2 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.958 09:39:06 version -- app/version.sh@19 -- # patch=0 00:07:18.958 09:39:06 version -- app/version.sh@20 -- # get_header_version suffix 00:07:18.958 09:39:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:18.958 09:39:06 version -- app/version.sh@14 -- # cut -f2 00:07:18.958 09:39:06 version -- app/version.sh@20 -- # suffix=-pre 00:07:18.958 09:39:06 version -- app/version.sh@22 -- # version=25.1 00:07:18.958 09:39:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:18.958 09:39:06 version -- app/version.sh@28 -- # version=25.1rc0 00:07:18.959 09:39:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:18.959 09:39:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:18.959 09:39:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:18.959 09:39:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:18.959 00:07:18.959 real 0m0.243s 00:07:18.959 user 0m0.154s 00:07:18.959 sys 0m0.127s 00:07:18.959 09:39:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.959 09:39:06 version -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 ************************************ 00:07:18.959 END TEST version 00:07:18.959 ************************************ 00:07:18.959 09:39:06 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:18.959 09:39:06 -- spdk/autotest.sh@194 -- # uname -s 00:07:18.959 09:39:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:18.959 09:39:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:18.959 09:39:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:18.959 09:39:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:18.959 09:39:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.959 09:39:06 -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 09:39:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:18.959 09:39:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:18.959 09:39:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.959 09:39:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.959 09:39:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.959 09:39:06 -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 ************************************ 00:07:18.959 START TEST nvmf_tcp 00:07:18.959 ************************************ 00:07:18.959 09:39:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:18.959 * Looking for test storage... 00:07:18.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:18.959 09:39:06 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.959 09:39:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.959 09:39:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.218 09:39:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.218 --rc genhtml_branch_coverage=1 00:07:19.218 --rc genhtml_function_coverage=1 00:07:19.218 --rc genhtml_legend=1 00:07:19.218 --rc geninfo_all_blocks=1 00:07:19.218 --rc geninfo_unexecuted_blocks=1 00:07:19.218 00:07:19.218 ' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.218 --rc genhtml_branch_coverage=1 00:07:19.218 --rc genhtml_function_coverage=1 00:07:19.218 --rc genhtml_legend=1 00:07:19.218 --rc geninfo_all_blocks=1 00:07:19.218 --rc geninfo_unexecuted_blocks=1 00:07:19.218 00:07:19.218 ' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.218 --rc genhtml_branch_coverage=1 00:07:19.218 --rc genhtml_function_coverage=1 00:07:19.218 --rc genhtml_legend=1 00:07:19.218 --rc geninfo_all_blocks=1 00:07:19.218 --rc geninfo_unexecuted_blocks=1 00:07:19.218 00:07:19.218 ' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.218 --rc genhtml_branch_coverage=1 00:07:19.218 --rc genhtml_function_coverage=1 00:07:19.218 --rc genhtml_legend=1 00:07:19.218 --rc geninfo_all_blocks=1 00:07:19.218 --rc geninfo_unexecuted_blocks=1 00:07:19.218 00:07:19.218 ' 00:07:19.218 09:39:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:19.218 09:39:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:19.218 09:39:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.218 09:39:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.218 ************************************ 00:07:19.218 START TEST nvmf_target_core 00:07:19.218 ************************************ 00:07:19.218 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:19.218 * Looking for test storage... 00:07:19.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:19.218 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.218 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.218 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.478 --rc genhtml_branch_coverage=1 00:07:19.478 --rc genhtml_function_coverage=1 00:07:19.478 --rc genhtml_legend=1 00:07:19.478 --rc geninfo_all_blocks=1 00:07:19.478 --rc geninfo_unexecuted_blocks=1 00:07:19.478 00:07:19.478 ' 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.478 --rc genhtml_branch_coverage=1 00:07:19.478 --rc genhtml_function_coverage=1 00:07:19.478 --rc genhtml_legend=1 00:07:19.478 --rc geninfo_all_blocks=1 00:07:19.478 --rc geninfo_unexecuted_blocks=1 00:07:19.478 00:07:19.478 ' 00:07:19.478 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.478 --rc genhtml_branch_coverage=1 00:07:19.478 --rc genhtml_function_coverage=1 00:07:19.478 --rc genhtml_legend=1 00:07:19.478 --rc geninfo_all_blocks=1 00:07:19.478 --rc geninfo_unexecuted_blocks=1 00:07:19.478 00:07:19.478 ' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.479 --rc genhtml_branch_coverage=1 00:07:19.479 --rc genhtml_function_coverage=1 00:07:19.479 --rc genhtml_legend=1 00:07:19.479 --rc geninfo_all_blocks=1 00:07:19.479 --rc geninfo_unexecuted_blocks=1 00:07:19.479 00:07:19.479 ' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.479 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.479 ************************************ 00:07:19.479 START TEST nvmf_abort 00:07:19.479 ************************************ 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:19.479 * Looking for test storage... 
00:07:19.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.479 09:39:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.479 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.480 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.742 --rc genhtml_branch_coverage=1 00:07:19.742 --rc genhtml_function_coverage=1 00:07:19.742 --rc genhtml_legend=1 00:07:19.742 --rc geninfo_all_blocks=1 00:07:19.742 --rc geninfo_unexecuted_blocks=1 00:07:19.742 00:07:19.742 ' 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.742 --rc genhtml_branch_coverage=1 00:07:19.742 --rc genhtml_function_coverage=1 00:07:19.742 --rc genhtml_legend=1 00:07:19.742 --rc geninfo_all_blocks=1 00:07:19.742 --rc geninfo_unexecuted_blocks=1 00:07:19.742 00:07:19.742 ' 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.742 --rc genhtml_branch_coverage=1 00:07:19.742 --rc genhtml_function_coverage=1 00:07:19.742 --rc genhtml_legend=1 00:07:19.742 --rc geninfo_all_blocks=1 00:07:19.742 --rc geninfo_unexecuted_blocks=1 00:07:19.742 00:07:19.742 ' 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.742 --rc genhtml_branch_coverage=1 00:07:19.742 --rc genhtml_function_coverage=1 00:07:19.742 --rc genhtml_legend=1 00:07:19.742 --rc geninfo_all_blocks=1 00:07:19.742 --rc geninfo_unexecuted_blocks=1 00:07:19.742 00:07:19.742 ' 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.742 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:19.743 
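
A side note on the enormous PATH echoed above: paths/export.sh prepends the go, protoc, and golangci directories unconditionally, and it is sourced repeatedly across the run, so the same components pile up. Harmless here, but the usual idempotent-prepend idiom avoids the growth entirely (a sketch of the idiom, not what export.sh does):

    # Prepend $1 to PATH only if it is not already a component.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present; leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH
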
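
The "[: : integer expression expected" complaint from common.sh line 33 is bash's test builtin refusing an empty string where -eq needs an integer; the script tolerates it because the comparison simply fails and execution continues. Reproduced in miniature, with a defensive variant (the variable name here is illustrative, not the one common.sh actually tests):

    flag=''                                # empty, as in the trace above
    [ "$flag" -eq 1 ] && echo on           # stderr: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on      # safe: empty or unset defaults to 0
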
09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:07:19.743 Cannot find device "nvmf_init_br" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:19.743 Cannot find device "nvmf_init_br2" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:19.743 Cannot find device "nvmf_tgt_br" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.743 Cannot find device "nvmf_tgt_br2" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:19.743 Cannot find device "nvmf_init_br" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:19.743 Cannot find device "nvmf_init_br2" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:19.743 Cannot find device "nvmf_tgt_br" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:19.743 Cannot find device "nvmf_tgt_br2" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:19.743 Cannot find device "nvmf_br" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:19.743 Cannot find device "nvmf_init_if" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:19.743 Cannot find device "nvmf_init_if2" 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:19.743 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:19.744 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:19.744 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:19.744 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:20.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:07:20.003 00:07:20.003 --- 10.0.0.3 ping statistics --- 00:07:20.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.003 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:20.003 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:20.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:20.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:07:20.261 00:07:20.261 --- 10.0.0.4 ping statistics --- 00:07:20.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.261 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:20.261 00:07:20.261 --- 10.0.0.1 ping statistics --- 00:07:20.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.261 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:20.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:20.261 00:07:20.261 --- 10.0.0.2 ping statistics --- 00:07:20.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.261 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=63179 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 63179 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 63179 ']' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.261 09:39:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 [2024-12-05 09:39:07.773412] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
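
The target now starting lives entirely inside the nvmf_tgt_ns_spdk namespace; everything it will answer on was wired up by the nvmf_veth_init trace above. Condensed to one initiator/target pair (the helper builds a second pair of each, nvmf_init_if2 and nvmf_tgt_if2, the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end enters the netns

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # The bridge stitches the host-side peers together, which is why the
    # initiator at 10.0.0.1 can ping 10.0.0.3 inside the namespace:
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

Note also that each iptables rule was inserted with an "-m comment ... SPDK_NVMF:..." tag; teardown later removes exactly these rules by filtering iptables-save output through grep -v SPDK_NVMF and feeding the result back to iptables-restore, as the nvmf_tcp_fini trace at the end of this test shows.
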
00:07:20.261 [2024-12-05 09:39:07.773589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.520 [2024-12-05 09:39:07.961513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.520 [2024-12-05 09:39:08.125214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.520 [2024-12-05 09:39:08.125288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.520 [2024-12-05 09:39:08.125314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.520 [2024-12-05 09:39:08.125329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.520 [2024-12-05 09:39:08.125349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.520 [2024-12-05 09:39:08.127871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.520 [2024-12-05 09:39:08.127992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.520 [2024-12-05 09:39:08.128151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.455 [2024-12-05 09:39:08.862683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.455 Malloc0 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.455 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.456 
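
Layering Delay0 on top of Malloc0 is the heart of this test's setup: the four numeric arguments to bdev_delay_create inject average and tail read/write latencies into every I/O, so requests linger inside the target long enough for abort commands to catch them in flight. Outside the harness's rpc_cmd wrapper, the three calls traced so far are ordinary scripts/rpc.py invocations against the default /var/tmp/spdk.sock (a sketch of the equivalent, not what the harness literally runs):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # large delays on all I/O paths
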
Delay0 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.456 [2024-12-05 09:39:08.980831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.456 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:21.714 [2024-12-05 09:39:09.233499] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:24.247 Initializing NVMe Controllers 00:07:24.247 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.247 controller IO queue size 128 less than required 00:07:24.247 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:24.247 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:24.247 Initialization complete. Launching workers. 
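
With nqn.2016-06.io.spdk:cnode0 exposing Delay0 on 10.0.0.3:4420, the abort example generates the load. Reading the flags (an interpretation, not stated in the log itself): -r names the target to connect to, -c 0x1 pins the worker to core 0, -t 1 bounds the run to one second, -l warning sets the log level, and -q 128 requests a deeper queue than the controller advertises, which triggers the "queue size 128 less than required" notice and guarantees I/O sits queued where aborts can reach it:

    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The counters in the summary just below are consistent with each other: 26,254 I/Os "failed" because they were successfully aborted (matching the 26,254 successful aborts), 123 completed before an abort could land, 57 submitted aborts missed their I/O (26,254 + 57 = 26,311 aborts submitted), and 66 aborts could not be submitted at all.
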
00:07:24.247 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26254 00:07:24.247 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26311, failed to submit 66 00:07:24.248 success 26254, unsuccessful 57, failed 0 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.248 rmmod nvme_tcp 00:07:24.248 rmmod nvme_fabrics 00:07:24.248 rmmod nvme_keyring 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 63179 ']' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 63179 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 63179 ']' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 63179 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63179 00:07:24.248 killing process with pid 63179 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63179' 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 63179 00:07:24.248 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 63179 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:07:25.214 00:07:25.214 real 0m5.842s 00:07:25.214 user 0m15.093s 00:07:25.214 sys 0m1.338s 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.214 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.214 ************************************ 00:07:25.214 END TEST nvmf_abort 00:07:25.214 ************************************ 00:07:25.215 09:39:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:25.215 09:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.215 09:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.215 09:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.215 ************************************ 00:07:25.215 START TEST nvmf_ns_hotplug_stress 00:07:25.215 ************************************ 00:07:25.215 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:25.474 * Looking for test storage... 00:07:25.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.475 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:25.476 09:39:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:25.476 Cannot find device "nvmf_init_br" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:25.476 Cannot find device "nvmf_init_br2" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:25.476 Cannot find device "nvmf_tgt_br" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.476 Cannot find device "nvmf_tgt_br2" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:25.476 Cannot find device "nvmf_init_br" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:25.476 Cannot find device "nvmf_init_br2" 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:07:25.476 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:25.735 Cannot find device "nvmf_tgt_br" 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:25.735 Cannot find device "nvmf_tgt_br2" 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:25.735 Cannot find device "nvmf_br" 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:25.735 Cannot find device "nvmf_init_if" 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:07:25.735 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:25.736 Cannot find device "nvmf_init_if2" 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.736 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:25.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:25.995 00:07:25.995 --- 10.0.0.3 ping statistics --- 00:07:25.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.995 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:25.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:25.995 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:07:25.995 00:07:25.995 --- 10.0.0.4 ping statistics --- 00:07:25.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.995 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:25.995 00:07:25.995 --- 10.0.0.1 ping statistics --- 00:07:25.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.995 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:25.995 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:25.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:07:25.995 00:07:25.995 --- 10.0.0.2 ping statistics --- 00:07:25.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.996 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=63512 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 63512 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 63512 ']' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.996 09:39:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.996 09:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.996 [2024-12-05 09:39:13.589760] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:25.996 [2024-12-05 09:39:13.589972] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.254 [2024-12-05 09:39:13.790192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.513 [2024-12-05 09:39:13.947557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.513 [2024-12-05 09:39:13.947643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.513 [2024-12-05 09:39:13.947670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.513 [2024-12-05 09:39:13.947686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.513 [2024-12-05 09:39:13.947707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
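Everything from the veth/bridge plumbing above converges here: the target is started inside nvmf_tgt_ns_spdk and the harness blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, with the binary path and flags copied from the trace (the polling loop approximates waitforlisten and is not the literal common.sh implementation):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the UNIX domain socket; rpc_get_methods succeeds once the app is listening.
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

Once the socket is live, the provisioning RPCs traced below (nvmf_create_transport, nvmf_create_subsystem, the listeners, and the Malloc0/Delay0/NULL1 bdev setup) can be issued against it.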
00:07:26.513 [2024-12-05 09:39:13.950343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.513 [2024-12-05 09:39:13.950460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.513 [2024-12-05 09:39:13.950461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:27.080 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.644 [2024-12-05 09:39:14.981922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.645 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.901 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:28.157 [2024-12-05 09:39:15.767079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:28.414 09:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:28.671 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:28.928 Malloc0 00:07:28.928 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:29.493 Delay0 00:07:29.493 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.749 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:30.007 NULL1 00:07:30.007 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:30.266 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=63654 00:07:30.266 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:30.266 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:30.266 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.644 Read completed with error (sct=0, sc=11) 00:07:31.644 09:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.902 09:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:31.902 09:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:32.161 true 00:07:32.161 09:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:32.161 09:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.098 09:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.357 09:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:33.357 09:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:33.616 true 00:07:33.616 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:33.616 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.876 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.136 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:34.136 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:34.394 true 00:07:34.394 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 63654 00:07:34.394 09:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.653 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.653 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:34.653 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:34.912 true 00:07:34.912 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:34.912 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.849 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.109 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:36.109 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:36.368 true 00:07:36.368 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:36.368 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.627 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.886 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:36.886 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:37.145 true 00:07:37.145 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:37.145 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.404 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.663 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:37.663 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:37.921 true 00:07:37.921 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:37.921 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.854 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.420 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:39.420 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:39.420 true 00:07:39.684 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:39.684 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.684 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.961 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:39.961 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:40.239 true 00:07:40.504 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:40.504 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.504 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.762 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:40.762 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:41.019 true 00:07:41.019 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:41.019 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.952 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.211 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:42.211 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:42.470 true 00:07:42.470 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:42.470 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.728 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.987 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:42.987 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:43.246 true 00:07:43.246 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:43.246 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.504 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.763 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:43.763 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:44.021 true 00:07:44.021 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:44.021 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.957 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.217 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:45.217 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:45.475 true 00:07:45.475 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:45.475 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.413 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.671 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:46.671 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:46.671 true 00:07:46.930 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:46.930 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.188 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.448 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:47.448 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:47.448 true 00:07:47.706 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:47.706 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.706 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.963 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:47.963 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:48.528 true 00:07:48.528 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:48.528 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.465 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.465 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:49.465 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:49.723 true 00:07:49.723 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:49.723 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.981 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.240 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:50.240 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:50.499 
true 00:07:50.500 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:50.500 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.758 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.017 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:51.017 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:51.275 true 00:07:51.276 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:51.276 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.470 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.729 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:52.729 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:52.990 true 00:07:52.990 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:52.990 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.586 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.845 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:53.845 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:54.104 true 00:07:54.104 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:54.104 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.363 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.621 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:54.622 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:54.881 true 00:07:54.881 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:54.881 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.818 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.818 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:55.818 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:56.076 true 00:07:56.076 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:56.076 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.334 09:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.594 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:56.594 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:57.161 true 00:07:57.161 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:57.161 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.161 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.420 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:57.420 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:57.678 true 00:07:57.678 09:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:57.678 09:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.613 09:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.871 09:39:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:58.871 09:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:59.130 true 00:07:59.130 09:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:59.130 09:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.388 09:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.646 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:59.646 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:59.904 true 00:07:59.904 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:07:59.904 09:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.841 Initializing NVMe Controllers 00:08:00.841 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:00.841 Controller IO queue size 128, less than required. 00:08:00.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:00.841 Controller IO queue size 128, less than required. 00:08:00.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:00.841 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:00.841 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:00.841 Initialization complete. Launching workers. 
00:08:00.841 ======================================================== 00:08:00.841 Latency(us) 00:08:00.841 Device Information : IOPS MiB/s Average min max 00:08:00.841 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 614.67 0.30 101273.55 3715.56 1086549.39 00:08:00.841 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8955.27 4.37 14292.59 3274.87 572186.56 00:08:00.841 ======================================================== 00:08:00.841 Total : 9569.93 4.67 19879.29 3274.87 1086549.39 00:08:00.841 00:08:00.841 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.841 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:00.841 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:01.103 true 00:08:01.103 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 63654 00:08:01.103 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (63654) - No such process 00:08:01.103 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 63654 00:08:01.103 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.362 09:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.620 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:01.620 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:01.620 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:01.620 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.620 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:01.879 null0 00:08:01.879 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.879 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.879 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:02.138 null1 00:08:02.138 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.138 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.138 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:02.396 null2 00:08:02.396 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.396 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.396 09:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:02.396 null3 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:02.655 null4 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.655 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:02.913 null5 00:08:02.914 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.914 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.914 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:03.172 null6 00:08:03.432 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.432 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.432 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:03.432 null7 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
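The null0..null7 bdevs created above each get a dedicated add/remove worker. A compact sketch of that pattern, reconstructed from the @14-@18 and @58-@66 trace lines around this point (rpc.py abbreviated for readability; treat this as an approximation of ns_hotplug_stress.sh, not a verbatim copy):

add_remove() {
    local nsid=$1 bdev=$2
    # Attach and detach the same namespace ten times in a tight loop.
    for ((i = 0; i < 10; i++)); do
        rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

pids=()
for ((i = 0; i < 8; i++)); do
    add_remove $((i + 1)) "null$i" &    # nsid 1..8 against null0..null7
    pids+=($!)
done
wait "${pids[@]}"

Running all eight workers in the background makes the add/remove RPCs race against each other on the same subsystem, which is exactly the hotplug contention this test is after.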
00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.432 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
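For context, the @40-@50 trace lines earlier in this run drove the other half of the stress: spdk_nvme_perf issued 30 seconds of queued random reads against 10.0.0.3:4420 while namespace 1 was hot-removed, re-added, and NULL1 grown by one block per pass. A condensed sketch of that loop, with commands and values taken from the trace (rpc.py abbreviated; per-call error suppression omitted):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                          # until perf exits
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-unplug ns 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-plug it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                     # grow NULL1 one block
done

The "Read completed with error (sct=0, sc=11)" bursts scattered through the log are the expected initiator-side symptom of each hot-unplug while perf I/O is in flight.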
00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.433 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 64699 64701 64702 64704 64706 64707 64710 64712 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.692 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.951 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.210 
09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.210 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.469 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.469 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.469 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.469 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.469 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.469 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.469 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.727 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.986 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.244 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.502 
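For reference, the two RPCs exercised here take the subsystem NQN, a namespace ID, and (for the add) a bdev name. A single add/remove pair, exactly as it appears in this log:

    # Attach bdev null4 to cnode1 as namespace 5, then detach it again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5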
09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.502 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.502 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.502 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.760 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.019 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.278 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.536 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.536 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.794 
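The null0 through null7 bdevs cycled through here are created before this excerpt begins; a hypothetical setup step is sketched below. The bdev names match the log, but the size and block size are assumptions, since the actual creation is not visible in this excerpt.

    # Hypothetical creation of the eight null bdevs used above;
    # bdev_null_create takes name, total size in MB, and block size.
    for b in {0..7}; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create "null$b" 100 4096
    done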
09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.794 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.052 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.311 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.570 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.570 09:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.570 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.828 
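Comparing the NSID sequences across passes (3,6,7,4,5,1,2,8 in one, 3,4,7,6,2,1,5,8 in the next, and so on) shows the per-pass order is randomized. One plausible mechanism, shown only as an assumption about how the script achieves it:

    # Issue the eight removals in a random order, as the trace suggests.
    for n in $(shuf -e {1..8}); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    done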
09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.828 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.086 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.343 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.343 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.343 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.343 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.343 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.344 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.601 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.601 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.601 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.601 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.859 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.117 
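In these later passes the add (@17) and remove (@18) entries interleave within the same second rather than completing as separate phases, which is consistent with the add and remove sides running as concurrent jobs. Whether the script actually backgrounds them is an assumption; the pattern would look like:

    # Illustrative concurrency pattern only, not taken from the script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for n in $(shuf -e {1..8}); do "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"; done &
    for n in $(shuf -e {1..8}); do "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"; done &
    wait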
09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.117 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.375 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.375 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.634 rmmod nvme_tcp 00:08:09.634 rmmod nvme_fabrics 00:08:09.634 rmmod nvme_keyring 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 63512 ']' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 63512 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 63512 ']' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 63512 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63512 00:08:09.634 killing process with pid 63512 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63512' 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 63512 00:08:09.634 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 63512 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
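The sequence above is the standard teardown: the EXIT trap is cleared (@68), then nvmftestfini (@70) syncs, unloads the initiator-side kernel modules, and kills the nvmf target application. A sketch of its visible effect in this run; pid 63512 is the target app from this log:

    # Shape of the nvmftestfini teardown traced above.
    sync
    modprobe -v -r nvme-tcp       # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill 63512 2>/dev/null || true   # "killing process with pid 63512"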
00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.010 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:08:11.269 00:08:11.269 real 0m45.850s 00:08:11.269 user 3m38.489s 00:08:11.269 sys 0m12.482s 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.269 09:39:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:11.269 ************************************ 00:08:11.269 END TEST nvmf_ns_hotplug_stress 00:08:11.269 ************************************ 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.269 ************************************ 00:08:11.269 START TEST nvmf_delete_subsystem 00:08:11.269 ************************************ 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:11.269 * Looking for test storage... 00:08:11.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.269 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.528 --rc genhtml_branch_coverage=1 00:08:11.528 --rc genhtml_function_coverage=1 00:08:11.528 --rc genhtml_legend=1 00:08:11.528 --rc geninfo_all_blocks=1 00:08:11.528 --rc geninfo_unexecuted_blocks=1 00:08:11.528 00:08:11.528 ' 00:08:11.528 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.529 --rc genhtml_branch_coverage=1 00:08:11.529 --rc genhtml_function_coverage=1 00:08:11.529 --rc genhtml_legend=1 00:08:11.529 --rc geninfo_all_blocks=1 00:08:11.529 --rc geninfo_unexecuted_blocks=1 00:08:11.529 00:08:11.529 ' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.529 --rc genhtml_branch_coverage=1 00:08:11.529 --rc genhtml_function_coverage=1 00:08:11.529 --rc genhtml_legend=1 00:08:11.529 --rc geninfo_all_blocks=1 00:08:11.529 --rc geninfo_unexecuted_blocks=1 00:08:11.529 00:08:11.529 ' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.529 --rc genhtml_branch_coverage=1 00:08:11.529 --rc genhtml_function_coverage=1 00:08:11.529 --rc genhtml_legend=1 00:08:11.529 --rc geninfo_all_blocks=1 00:08:11.529 --rc geninfo_unexecuted_blocks=1 00:08:11.529 00:08:11.529 ' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.529 
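
Everything from here to the end of the test inherits the environment that sourcing nvmf/common.sh sets up above: fixed listener ports (4420/4421/4422), the 10.0.0.x addressing, and a per-run host identity generated with nvme-cli. A minimal sketch of that identity step, with names taken from the trace above (the exact derivation inside common.sh may differ; the suffix-stripping shown here is an illustrative assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random UUID>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the bare UUID, reused as --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # passed to every `nvme connect`
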
09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
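
The variables just exported describe the virtual topology that nvmf_veth_init builds in the commands below: two initiator interfaces (10.0.0.1 and 10.0.0.2) in the root namespace, two target interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, with all four peer ends enslaved to the nvmf_br bridge. Reduced to a single initiator/target pair, the setup is a sketch of what the trace below actually runs (commands and names copied from the log; the remaining interfaces and the `ip link set ... up` calls follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the root-namespace peer ends together
    ip link set nvmf_tgt_br master nvmf_br
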
00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:11.529 Cannot find device "nvmf_init_br" 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:11.529 Cannot find device "nvmf_init_br2" 00:08:11.529 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:11.530 Cannot find device "nvmf_tgt_br" 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.530 Cannot find device "nvmf_tgt_br2" 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:11.530 Cannot find device "nvmf_init_br" 00:08:11.530 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:11.530 Cannot find device "nvmf_init_br2" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:11.530 Cannot find device "nvmf_tgt_br" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:11.530 Cannot find device "nvmf_tgt_br2" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:11.530 Cannot find device "nvmf_br" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:11.530 Cannot find device "nvmf_init_if" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:11.530 Cannot find device "nvmf_init_if2" 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
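
The "Cannot find device" and "Cannot open network namespace" messages above are expected: before building anything, the script tears down whatever a previous run may have left behind, and the `# true` entries paired with each failing command in the trace indicate the deletes are allowed to fail. The idiom is roughly (inferred from those paired trace entries, so a sketch rather than the literal common.sh source):

    ip link set nvmf_init_br nomaster || true      # detach from the bridge if it exists
    ip link set nvmf_init_br down || true
    ip link delete nvmf_br type bridge || true     # "Cannot find device" is harmless on a clean host
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
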
00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.530 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.788 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.788 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:11.788 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:11.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:11.789 00:08:11.789 --- 10.0.0.3 ping statistics --- 00:08:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.789 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:11.789 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:11.789 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:08:11.789 00:08:11.789 --- 10.0.0.4 ping statistics --- 00:08:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.789 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:11.789 00:08:11.789 --- 10.0.0.1 ping statistics --- 00:08:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.789 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:11.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:08:11.789 00:08:11.789 --- 10.0.0.2 ping statistics --- 00:08:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.789 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=66089 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 66089 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 66089 ']' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.789 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.047 [2024-12-05 09:39:59.466163] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:12.047 [2024-12-05 09:39:59.466362] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.047 [2024-12-05 09:39:59.662143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:12.306 [2024-12-05 09:39:59.812979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.306 [2024-12-05 09:39:59.813072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.306 [2024-12-05 09:39:59.813106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.306 [2024-12-05 09:39:59.813138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.306 [2024-12-05 09:39:59.813158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.306 [2024-12-05 09:39:59.815775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.306 [2024-12-05 09:39:59.815781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.873 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.873 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:12.873 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.873 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.873 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 [2024-12-05 09:40:00.527223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 [2024-12-05 09:40:00.549115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 NULL1 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 Delay0 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66145 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:13.133 09:40:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:13.392 [2024-12-05 09:40:00.839846] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
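
This is the core of the delete_subsystem test: the subsystem's only namespace sits on a delay bdev with 1,000,000 us (one second) of injected latency per operation, spdk_nvme_perf is started against it on cores 2-3 (-c 0xC) at queue depth 128, and the subsystem is then deleted while that I/O is still in flight. The flood of "Read/Write completed with error (sct=0, sc=8)" below is the intended outcome: generic NVMe status 0x08 is "Command Aborted due to SQ Deletion", and "starting I/O failed: -6" is -ENXIO once the qpairs are gone. Condensed, the provisioning sequence traced above looks as follows (rpc_cmd in this suite is a wrapper around SPDK's scripts/rpc.py; flags are copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

After the deletion, the script polls the perf PID with kill -0 and then asserts via the NOT/wait helpers that the process exited with an error, which is exactly what a torn-down subsystem should produce.
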
00:08:15.294 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
09:40:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
09:40:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:15.294 Read completed with error (sct=0, sc=8)
00:08:15.294 Read completed with error (sct=0, sc=8)
00:08:15.294 Read completed with error (sct=0, sc=8)
00:08:15.294 Read completed with error (sct=0, sc=8)
00:08:15.294 starting I/O failed: -6
[... further blocks of four Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed: -6", omitted ...]
00:08:15.295 [2024-12-05 09:40:02.898652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fd00 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) and "starting I/O failed: -6" markers omitted ...]
00:08:15.295 [2024-12-05 09:40:02.900635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000ff80 is same with the state(6) to be set
[... a long run of Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:15.295 [2024-12-05 09:40:02.901535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 [2024-12-05 09:40:03.859894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f300 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 [2024-12-05 09:40:03.897953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 [2024-12-05 09:40:03.898298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 [2024-12-05 09:40:03.900009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(6) to be set
[... more Read/Write completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 Initializing NVMe Controllers
00:08:16.671 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:16.671 Controller IO queue size 128, less than required.
00:08:16.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:16.671 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:16.671 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:16.671 Initialization complete. Launching workers.
00:08:16.671 ========================================================
00:08:16.671 Latency(us)
00:08:16.671 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:08:16.671 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     172.89       0.08  892375.90     800.77 1022879.04
00:08:16.671 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     170.42       0.08  896353.34    1561.78 1023010.96
00:08:16.671 ========================================================
00:08:16.671 Total                                                                     :     343.30       0.17  894350.31     800.77 1023010.96
00:08:16.671
00:08:16.671 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... six final Read completions with error (sct=0, sc=8) omitted ...]
00:08:16.671 [2024-12-05 09:40:03.904125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
09:40:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
09:40:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66145
09:40:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:16.671 [2024-12-05 09:40:03.905988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f300 (9): Bad file descriptor
00:08:16.671 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66145
00:08:16.930
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66145) - No such process 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66145 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 66145 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:16.930 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 66145 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.931 [2024-12-05 09:40:04.427769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66192 00:08:16.931 09:40:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:16.931 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.190 [2024-12-05 09:40:04.645824] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:17.449 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.449 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:17.449 09:40:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.013 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.013 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:18.013 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.578 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.578 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:18.578 09:40:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.836 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.836 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:18.836 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.403 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.403 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:19.403 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.971 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.971 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:19.971 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.230 Initializing NVMe Controllers 00:08:20.230 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.230 Controller IO queue size 128, less than required. 00:08:20.230 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:20.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:20.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:20.230 Initialization complete. Launching workers. 00:08:20.230 ======================================================== 00:08:20.230 Latency(us) 00:08:20.230 Device Information : IOPS MiB/s Average min max 00:08:20.230 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006944.14 1000231.71 1019964.31 00:08:20.230 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006916.81 1000239.46 1043221.58 00:08:20.230 ======================================================== 00:08:20.230 Total : 256.00 0.12 1006930.47 1000231.71 1043221.58 00:08:20.230 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66192 00:08:20.489 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66192) - No such process 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66192 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.489 09:40:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.489 rmmod nvme_tcp 00:08:20.489 rmmod nvme_fabrics 00:08:20.489 rmmod nvme_keyring 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 66089 ']' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 66089 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 66089 ']' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 66089 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66089 00:08:20.489 killing process with 
pid 66089 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66089' 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 66089 00:08:20.489 09:40:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 66089 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:21.426 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 
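The @297/@791 lines above are nvmftestfini scrubbing the firewall rules the test added. Every rule is installed with an SPDK_NVMF comment tag (visible in the @790 ipts lines further down, where rules carry -m comment --comment 'SPDK_NVMF:...'), so cleanup can be a filter over the saved ruleset instead of a list of per-rule deletes. The pipeline, as the three @791 xtrace lines suggest it is assembled:

    # Drop exactly the test-owned rules: save the whole ruleset, strip every
    # line carrying the SPDK_NVMF comment tag, and restore the remainder.
    iptables-save | grep -v SPDK_NVMF | iptables-restore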
00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:08:21.685 00:08:21.685 real 0m10.545s 00:08:21.685 user 0m30.605s 00:08:21.685 sys 0m1.724s 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.685 ************************************ 00:08:21.685 END TEST nvmf_delete_subsystem 00:08:21.685 ************************************ 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.685 09:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.945 ************************************ 00:08:21.945 START TEST nvmf_host_management 00:08:21.945 ************************************ 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.945 * Looking for test storage... 
00:08:21.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.945 --rc genhtml_branch_coverage=1 00:08:21.945 --rc genhtml_function_coverage=1 00:08:21.945 --rc genhtml_legend=1 00:08:21.945 --rc geninfo_all_blocks=1 00:08:21.945 --rc geninfo_unexecuted_blocks=1 00:08:21.945 00:08:21.945 ' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.945 --rc genhtml_branch_coverage=1 00:08:21.945 --rc genhtml_function_coverage=1 00:08:21.945 --rc genhtml_legend=1 00:08:21.945 --rc geninfo_all_blocks=1 00:08:21.945 --rc geninfo_unexecuted_blocks=1 00:08:21.945 00:08:21.945 ' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.945 --rc genhtml_branch_coverage=1 00:08:21.945 --rc genhtml_function_coverage=1 00:08:21.945 --rc genhtml_legend=1 00:08:21.945 --rc geninfo_all_blocks=1 00:08:21.945 --rc geninfo_unexecuted_blocks=1 00:08:21.945 00:08:21.945 ' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.945 --rc genhtml_branch_coverage=1 00:08:21.945 --rc genhtml_function_coverage=1 00:08:21.945 --rc genhtml_legend=1 00:08:21.945 --rc geninfo_all_blocks=1 00:08:21.945 --rc geninfo_unexecuted_blocks=1 00:08:21.945 00:08:21.945 ' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
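The scripts/common.sh trace above (@333 through @368) is the shell-only version comparison behind "lt 1.15 2": both version strings are split on the separators dot, dash, and colon into arrays and compared component by component, with the @353-@355 decimal helper validating each numeric piece. A simplified sketch of the same technique, assuming purely numeric components rather than the full cmp_versions logic:

    # Component-wise version compare (sketch of scripts/common.sh cmp_versions).
    # Returns 0 when $1 is strictly older than $2, e.g. `lt 1.15 2`.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }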
00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.945 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.946 09:40:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:21.946 Cannot find device "nvmf_init_br" 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:21.946 Cannot find device "nvmf_init_br2" 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:21.946 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:21.946 Cannot find device "nvmf_tgt_br" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.205 Cannot find device "nvmf_tgt_br2" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:22.205 Cannot find device "nvmf_init_br" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:22.205 Cannot find device "nvmf_init_br2" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:22.205 Cannot find device "nvmf_tgt_br" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:22.205 Cannot find device "nvmf_tgt_br2" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:22.205 Cannot find device "nvmf_br" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:22.205 Cannot find device "nvmf_init_if" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:22.205 Cannot find device "nvmf_init_if2" 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:22.205 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:22.206 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:22.206 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:22.206 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:22.206 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:22.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:22.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:22.464 00:08:22.464 --- 10.0.0.3 ping statistics --- 00:08:22.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.464 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:22.464 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:22.464 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:22.464 00:08:22.464 --- 10.0.0.4 ping statistics --- 00:08:22.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.464 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:22.464 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:22.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:22.465 00:08:22.465 --- 10.0.0.1 ping statistics --- 00:08:22.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.465 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:22.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:22.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:22.465 00:08:22.465 --- 10.0.0.2 ping statistics --- 00:08:22.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.465 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=66497 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 66497 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66497 ']' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.465 09:40:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.465 [2024-12-05 09:40:10.056870] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:22.465 [2024-12-05 09:40:10.057062] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.724 [2024-12-05 09:40:10.251491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.982 [2024-12-05 09:40:10.406287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.982 [2024-12-05 09:40:10.406353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.982 [2024-12-05 09:40:10.406380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.982 [2024-12-05 09:40:10.406397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.982 [2024-12-05 09:40:10.406414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.982 [2024-12-05 09:40:10.408913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.983 [2024-12-05 09:40:10.409067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.983 [2024-12-05 09:40:10.409205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.983 [2024-12-05 09:40:10.409223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.551 [2024-12-05 09:40:11.088744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:23.551 09:40:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.551 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.809 Malloc0 00:08:23.809 [2024-12-05 09:40:11.212386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=66570 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 66570 /var/tmp/bdevperf.sock 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66570 ']' 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:23.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.809 { 00:08:23.809 "params": { 00:08:23.809 "name": "Nvme$subsystem", 00:08:23.809 "trtype": "$TEST_TRANSPORT", 00:08:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.809 "adrfam": "ipv4", 00:08:23.809 "trsvcid": "$NVMF_PORT", 00:08:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.809 "hdgst": ${hdgst:-false}, 00:08:23.809 "ddgst": ${ddgst:-false} 00:08:23.809 }, 00:08:23.809 "method": "bdev_nvme_attach_controller" 00:08:23.809 } 00:08:23.809 EOF 00:08:23.809 )") 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:23.809 09:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.809 "params": { 00:08:23.809 "name": "Nvme0", 00:08:23.809 "trtype": "tcp", 00:08:23.809 "traddr": "10.0.0.3", 00:08:23.809 "adrfam": "ipv4", 00:08:23.809 "trsvcid": "4420", 00:08:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.809 "hdgst": false, 00:08:23.809 "ddgst": false 00:08:23.809 }, 00:08:23.809 "method": "bdev_nvme_attach_controller" 00:08:23.809 }' 00:08:23.809 [2024-12-05 09:40:11.358509] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:23.809 [2024-12-05 09:40:11.358640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66570 ] 00:08:24.067 [2024-12-05 09:40:11.539732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.067 [2024-12-05 09:40:11.691818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.633 Running I/O for 10 seconds... 
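The two @72 lines above show both halves of how the configuration reaches bdevperf: gen_nvmf_target_json renders the bdev_nvme_attach_controller parameter block printed at @586, and the --json /dev/fd/63 argument indicates it is handed over through bash process substitution rather than a temporary file. Reassembled, the invocation presumably looks like this (a sketch inferred from the xtrace, not a quote of host_management.sh):

    # How host_management.sh@72 feeds the generated JSON to bdevperf;
    # /dev/fd/63 in the trace is the process-substitution file descriptor.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64: queue depth; -o 65536: 64 KiB I/O size; -w verify: write-then-
    # read-back workload; -t 10: run for ten seconds (the "Running I/O for
    # 10 seconds..." line above marks the start of that window).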
00:08:24.892 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.893 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.893 [2024-12-05 
09:40:12.446928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:24.893 [2024-12-05 09:40:12.447030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:24.893 [2024-12-05 09:40:12.447083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:24.893 [2024-12-05 09:40:12.447102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for READ cid:1 through cid:61 (lba 49280 through 56960, len:128), every command completed with ABORTED - SQ DELETION (00/08) ...]
00:08:24.894 [2024-12-05 09:40:12.449062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:24.894 [2024-12-05 09:40:12.449082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:24.894 [2024-12-05 09:40:12.449098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set
00:08:24.894 task offset: 57216 on job bdev=Nvme0n1 fails
00:08:24.894
00:08:24.894 Latency(us)
00:08:24.894 [2024-12-05T09:40:12.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:24.894 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:24.894 Job: Nvme0n1 ended in about 0.32 seconds with error
00:08:24.894 Verification LBA range: start 0x0 length 0x400
00:08:24.894 Nvme0n1 : 0.32 1213.18 75.82 202.20 0.00 43453.51 8579.26 39559.91
00:08:24.894 [2024-12-05T09:40:12.523Z] ===================================================================================================================
00:08:24.894 [2024-12-05T09:40:12.523Z] Total : 1213.18 75.82 202.20 0.00 43453.51 8579.26 39559.91
00:08:24.894 [2024-12-05 09:40:12.449489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:08:24.895 [2024-12-05 09:40:12.449521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for ASYNC EVENT REQUEST cid:1 and cid:2 on the admin queue ...]
00:08:24.895 [2024-12-05 09:40:12.449611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:08:24.895 [2024-12-05 09:40:12.449624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
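
[Reviewer note: the burst above is this test behaving as intended. host_management evidently revokes the host's access to the subsystem while bdevperf still has its full queue (depth 64) in flight, so the target deletes the submission queue and every outstanding command completes as ABORTED - SQ DELETION; the nvmf_subsystem_add_host call just below is the restore half of that toggle, after which the controller reset succeeds. When triaging a run like this, a quick tally over the saved console output (build.log is a stand-in file name here, not from the run) confirms the whole queue drained through the abort path:

  grep -o 'ABORTED - SQ DELETION' build.log | wc -l   # one abort completion per queued command
  grep -o 'lba:[0-9]*' build.log | sort -u | head     # which LBAs were in flight when the queue died
]
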
00:08:24.895 [2024-12-05 09:40:12.449652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set
00:08:24.895 [2024-12-05 09:40:12.450965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:24.895 [2024-12-05 09:40:12.456085] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:24.895 [2024-12-05 09:40:12.456140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.895 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:24.895 [2024-12-05 09:40:12.466666] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 66570
00:08:25.942 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (66570) - No such process
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:25.942 {
00:08:25.942 "params": {
00:08:25.942 "name": "Nvme$subsystem",
00:08:25.942 "trtype": "$TEST_TRANSPORT",
00:08:25.942 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:25.942 "adrfam": "ipv4",
00:08:25.942 "trsvcid": "$NVMF_PORT",
00:08:25.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:25.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:25.942 "hdgst": ${hdgst:-false},
00:08:25.942 "ddgst": ${ddgst:-false}
00:08:25.942 },
00:08:25.942 "method": "bdev_nvme_attach_controller"
00:08:25.942 } 00:08:25.942 EOF 00:08:25.942 )") 00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:25.942 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:25.942 "params": { 00:08:25.942 "name": "Nvme0", 00:08:25.942 "trtype": "tcp", 00:08:25.942 "traddr": "10.0.0.3", 00:08:25.942 "adrfam": "ipv4", 00:08:25.942 "trsvcid": "4420", 00:08:25.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.942 "hdgst": false, 00:08:25.942 "ddgst": false 00:08:25.942 }, 00:08:25.942 "method": "bdev_nvme_attach_controller" 00:08:25.942 }' 00:08:26.201 [2024-12-05 09:40:13.569766] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:26.201 [2024-12-05 09:40:13.569909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66628 ] 00:08:26.201 [2024-12-05 09:40:13.737542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.460 [2024-12-05 09:40:13.852265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.718 Running I/O for 1 seconds... 00:08:27.656 1472.00 IOPS, 92.00 MiB/s 00:08:27.656 Latency(us) 00:08:27.656 [2024-12-05T09:40:15.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.656 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:27.656 Verification LBA range: start 0x0 length 0x400 00:08:27.656 Nvme0n1 : 1.03 1492.50 93.28 0.00 0.00 42180.52 8043.05 36938.47 00:08:27.656 [2024-12-05T09:40:15.285Z] =================================================================================================================== 00:08:27.656 [2024-12-05T09:40:15.285Z] Total : 1492.50 93.28 0.00 0.00 42180.52 8043.05 36938.47 00:08:28.590 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:28.590 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:28.590 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.849 09:40:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.849 rmmod nvme_tcp 00:08:28.849 rmmod nvme_fabrics 00:08:28.849 rmmod nvme_keyring 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 66497 ']' 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 66497 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 66497 ']' 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 66497 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66497 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66497' 00:08:28.849 killing process with pid 66497 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 66497 00:08:28.849 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 66497 00:08:30.226 [2024-12-05 09:40:17.441014] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.226 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:30.227 09:40:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:30.227 00:08:30.227 real 0m8.465s 00:08:30.227 user 0m32.256s 00:08:30.227 sys 0m1.882s 00:08:30.227 ************************************ 00:08:30.227 END TEST nvmf_host_management 00:08:30.227 ************************************ 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.227 ************************************ 00:08:30.227 START TEST nvmf_lvol 00:08:30.227 ************************************ 00:08:30.227 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:30.487 * Looking for test storage... 
00:08:30.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... cmp_versions xtrace (scripts/common.sh@333-@368): the versions split on IFS=.-: into ver1=(1 15) and ver2=(2), op='<'; field 0 is compared first, decimal validates 1 and 2, 1 < 2 holds, so the comparison succeeds ...]
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:30.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:30.487 --rc genhtml_branch_coverage=1
00:08:30.487 --rc genhtml_function_coverage=1
00:08:30.487 --rc genhtml_legend=1
00:08:30.487 --rc geninfo_all_blocks=1
00:08:30.487 --rc geninfo_unexecuted_blocks=1
00:08:30.487
00:08:30.487 '
[... the identical option block is printed three more times, for LCOV_OPTS=, export 'LCOV=lcov ...' and LCOV='lcov ...' (common/autotest_common.sh@1724-@1725) ...]
00:08:30.487 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:30.488 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
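
[Reviewer note: the cmp_versions walk above is worth spelling out once, since every test script sources it: the two versions are split on ".", "-" and ":" and compared field by field as integers. A minimal standalone sketch of the same comparison (version_lt is a hypothetical name; the real helpers are lt/cmp_versions in scripts/common.sh, which additionally validate each field with decimal):

  # returns 0 if version $1 is strictly older than $2, fields assumed numeric
  version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2, enable the --rc compatibility options"
]
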
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the go/protoc/golangci directories to the same PATH again, @5 exports it and @6 echoes the final value; the near-identical multi-hundred-character PATH strings are elided ...]
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:30.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:08:30.488
09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:08:30.488 Cannot find device "nvmf_init_br"
00:08:30.488 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true
[... the same probe-and-ignore pattern (ip link set ... nomaster/down, ip link delete, ip netns exec ... ip link delete, each answered with "Cannot find device ..." or "Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory" and swallowed by true) repeats for nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2, nvmf_br, nvmf_init_if, nvmf_init_if2, nvmf_tgt_if and nvmf_tgt_if2 (nvmf/common.sh@163-@174): nothing exists yet on a fresh runner ...]
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:30.748 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
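
[Reviewer note: at this point the virtual test network is in place: initiator-side veths nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) and target-side veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), with every peer end enslaved to the nvmf_br bridge. A condensed sketch of the equivalent setup for one interface pair, using the same names and addresses as the trace above (error handling omitted):

  ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up    # bridge ties the peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
]
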
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:31.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:31.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms
00:08:31.007
00:08:31.007 --- 10.0.0.3 ping statistics ---
00:08:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:31.007 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:31.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:31.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms
00:08:31.007
00:08:31.007 --- 10.0.0.4 ping statistics ---
00:08:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:31.007 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:31.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:31.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
00:08:31.007
00:08:31.007 --- 10.0.0.1 ping statistics ---
00:08:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:31.007 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
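
[Reviewer note: the ipts wrapper tags every rule it inserts with an SPDK_NVMF comment; that is what lets nvmftestfini remove exactly these rules later with iptables-save | grep -v SPDK_NVMF | iptables-restore, visible in the host_management teardown earlier in this log. The pattern in isolation:

  # insert a rule carrying a marker comment
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # later, drop every tagged rule in one shot without touching unrelated rules
  iptables-save | grep -v SPDK_NVMF | iptables-restore
]
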
00:08:31.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:31.007 00:08:31.007 --- 10.0.0.2 ping statistics --- 00:08:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.007 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:31.007 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66911 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66911 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66911 ']' 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.008 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.008 [2024-12-05 09:40:18.600638] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
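[Annotation] The nvmf_veth_init sequence above builds the test network: two initiator veth interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay on the host, two target interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four peer ends are enslaved to the nvmf_br bridge, with iptables ACCEPT rules opening TCP port 4420 on the initiator side. The four pings then verify connectivity in both directions. A minimal stand-alone sketch of the same topology, reduced to one initiator/target pair (names and addresses taken from the log, not the full common.sh setup):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                   # bridge the host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                        # host -> namespace, as in the log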
00:08:31.008 [2024-12-05 09:40:18.600808] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.266 [2024-12-05 09:40:18.797715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.525 [2024-12-05 09:40:18.947890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.525 [2024-12-05 09:40:18.947993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.525 [2024-12-05 09:40:18.948030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.525 [2024-12-05 09:40:18.948042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.525 [2024-12-05 09:40:18.948056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.525 [2024-12-05 09:40:18.950250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.525 [2024-12-05 09:40:18.950388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.525 [2024-12-05 09:40:18.950418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.093 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.352 [2024-12-05 09:40:19.943681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.352 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.921 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.921 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.179 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:33.179 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:33.438 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.697 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=66eda0af-2eb8-488e-872b-ccbaf19d23b7 00:08:33.697 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
66eda0af-2eb8-488e-872b-ccbaf19d23b7 lvol 20 00:08:33.955 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d1bf4b6b-46b4-436d-b5eb-cb926bd50cd2 00:08:33.955 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.214 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1bf4b6b-46b4-436d-b5eb-cb926bd50cd2 00:08:34.473 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:34.732 [2024-12-05 09:40:22.176640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.732 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:34.991 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67060 00:08:34.991 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:34.991 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:35.926 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d1bf4b6b-46b4-436d-b5eb-cb926bd50cd2 MY_SNAPSHOT 00:08:36.184 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=105d3c03-c65c-4a99-a30a-ee1baa0c1c59 00:08:36.184 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d1bf4b6b-46b4-436d-b5eb-cb926bd50cd2 30 00:08:36.751 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 105d3c03-c65c-4a99-a30a-ee1baa0c1c59 MY_CLONE 00:08:36.751 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=62bc466a-fb3c-490c-8ef9-69a74e82d6ce 00:08:36.751 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 62bc466a-fb3c-490c-8ef9-69a74e82d6ce 00:08:37.687 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67060 00:08:45.793 Initializing NVMe Controllers 00:08:45.793 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.793 Controller IO queue size 128, less than required. 00:08:45.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:45.793 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:45.793 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:45.793 Initialization complete. Launching workers. 
00:08:45.793 ======================================================== 00:08:45.793 Latency(us) 00:08:45.793 Device Information : IOPS MiB/s Average min max 00:08:45.793 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9491.30 37.08 13487.21 332.30 186620.85 00:08:45.793 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9297.70 36.32 13770.51 2967.24 208913.36 00:08:45.793 ======================================================== 00:08:45.793 Total : 18789.00 73.39 13627.40 332.30 208913.36 00:08:45.793 00:08:45.793 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.793 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d1bf4b6b-46b4-436d-b5eb-cb926bd50cd2 00:08:45.793 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66eda0af-2eb8-488e-872b-ccbaf19d23b7 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.052 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.310 rmmod nvme_tcp 00:08:46.310 rmmod nvme_fabrics 00:08:46.310 rmmod nvme_keyring 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66911 ']' 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66911 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66911 ']' 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66911 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66911 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.310 killing process with pid 66911 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66911' 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66911 00:08:46.310 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66911 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:47.687 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:47.687 ************************************ 00:08:47.687 END TEST nvmf_lvol 00:08:47.687 ************************************ 00:08:47.687 00:08:47.687 real 0m17.329s 00:08:47.687 user 
1m9.358s 00:08:47.687 sys 0m3.836s 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.687 ************************************ 00:08:47.687 START TEST nvmf_lvs_grow 00:08:47.687 ************************************ 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.687 * Looking for test storage... 00:08:47.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.687 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.946 --rc genhtml_branch_coverage=1 00:08:47.946 --rc genhtml_function_coverage=1 00:08:47.946 --rc genhtml_legend=1 00:08:47.946 --rc geninfo_all_blocks=1 00:08:47.946 --rc geninfo_unexecuted_blocks=1 00:08:47.946 00:08:47.946 ' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.946 --rc genhtml_branch_coverage=1 00:08:47.946 --rc genhtml_function_coverage=1 00:08:47.946 --rc genhtml_legend=1 00:08:47.946 --rc geninfo_all_blocks=1 00:08:47.946 --rc geninfo_unexecuted_blocks=1 00:08:47.946 00:08:47.946 ' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.946 --rc genhtml_branch_coverage=1 00:08:47.946 --rc genhtml_function_coverage=1 00:08:47.946 --rc genhtml_legend=1 00:08:47.946 --rc geninfo_all_blocks=1 00:08:47.946 --rc geninfo_unexecuted_blocks=1 00:08:47.946 00:08:47.946 ' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.946 --rc genhtml_branch_coverage=1 00:08:47.946 --rc genhtml_function_coverage=1 00:08:47.946 --rc genhtml_legend=1 00:08:47.946 --rc geninfo_all_blocks=1 00:08:47.946 --rc geninfo_unexecuted_blocks=1 00:08:47.946 00:08:47.946 ' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:47.946 09:40:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
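[Annotation] The "lt 1.15 2" trace earlier in this block is scripts/common.sh comparing the installed lcov version against 2.x: cmp_versions splits each version string on '.', '-' and ':' into an array and compares numerically, component by component. The "[: : integer expression expected" message logged from common.sh line 33 just above is a benign artifact of an empty string landing in a numeric test ('[' '' -eq 1 ']'); the condition simply evaluates false. A condensed sketch of the version-compare logic (simplified, not the exact common.sh implementation):

    lt() {                              # succeeds when version $1 sorts before $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                        # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x, use the legacy LCOV_OPTS"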
00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:47.946 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
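[Annotation] nvmftestinit has resolved to the virtual-ethernet path (NET_TYPE=virt with tcp), and this block pins down the test's address plan and namespace plumbing: initiators 10.0.0.1/10.0.0.2 on the host, targets 10.0.0.3/10.0.0.4 behind nvmf_tgt_ns_spdk. NVMF_TARGET_NS_CMD is an array prefix that common.sh later splices in front of NVMF_APP (the @227 line in this log), so the target binary runs inside the namespace. A sketch of that splice pattern, with values as they appear in this run:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
    # common.sh@227: prepend the namespace wrapper onto the app command line
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x1 &    # expands to: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1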
00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:47.947 Cannot find device "nvmf_init_br" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:47.947 Cannot find device "nvmf_init_br2" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:47.947 Cannot find device "nvmf_tgt_br" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.947 Cannot find device "nvmf_tgt_br2" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:47.947 Cannot find device "nvmf_init_br" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:47.947 Cannot find device "nvmf_init_br2" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:47.947 Cannot find device "nvmf_tgt_br" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:47.947 Cannot find device "nvmf_tgt_br2" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:47.947 Cannot find device "nvmf_br" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:47.947 Cannot find device "nvmf_init_if" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:47.947 Cannot find device "nvmf_init_if2" 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:47.947 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
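[Annotation] The run of "Cannot find device" and "Cannot open network namespace" errors just before this rebuild is expected: nvmf_veth_init begins by tearing down whatever a previous test left behind, and each cleanup command carries an "|| true" guard (visible as the repeated "-- # true" xtrace entries on the same common.sh line numbers) so a missing device cannot trip the script's errexit handling. The guard pattern, roughly:

    # best-effort cleanup; failures are tolerated so set -e does not fire
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    # ...after which the namespace, veth pairs and bridge are recreated from scratch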
00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.205 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:08:48.205 00:08:48.205 --- 10.0.0.3 ping statistics --- 00:08:48.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.205 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:48.481 00:08:48.481 --- 10.0.0.4 ping statistics --- 00:08:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.481 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:48.481 00:08:48.481 --- 10.0.0.1 ping statistics --- 00:08:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.481 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:48.481 00:08:48.481 --- 10.0.0.2 ping statistics --- 00:08:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.481 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=67491 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 67491 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 67491 ']' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.481 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.481 [2024-12-05 09:40:36.007285] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
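[Annotation] nvmfappstart now launches the second target of this section: the same nvmf_tgt binary inside the namespace, but with -m 0x1 (a single reactor on core 0) where the nvmf_lvol run above used -m 0x7 (three reactors on cores 0-2). waitforlisten (max_retries=100 in the trace) then polls /var/tmp/spdk.sock until the app answers RPCs. A condensed equivalent of this start-and-wait step; using rpc_get_methods as the liveness probe is an assumption, the real waitforlisten helper does more:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do    # waitforlisten-style retry loop
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null && break
        sleep 0.1
    done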
00:08:48.481 [2024-12-05 09:40:36.007443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.739 [2024-12-05 09:40:36.179912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.739 [2024-12-05 09:40:36.280575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.739 [2024-12-05 09:40:36.280851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.739 [2024-12-05 09:40:36.280881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.739 [2024-12-05 09:40:36.280924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.739 [2024-12-05 09:40:36.280938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.739 [2024-12-05 09:40:36.282194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.305 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.305 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:49.305 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.305 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.305 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.563 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.563 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.563 [2024-12-05 09:40:37.152841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.563 ************************************ 00:08:49.563 START TEST lvs_grow_clean 00:08:49.563 ************************************ 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.563 09:40:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.563 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.822 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.822 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.080 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.080 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.339 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=05734eb7-d22b-434f-8de5-d7a59717bc68 00:08:50.339 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:08:50.339 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.597 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.597 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.597 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 05734eb7-d22b-434f-8de5-d7a59717bc68 lvol 150 00:08:50.856 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=46f76981-180b-4964-bafd-3a0c558c1285 00:08:50.856 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.856 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.115 [2024-12-05 09:40:38.623278] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.115 [2024-12-05 09:40:38.623562] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.115 true 00:08:51.115 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:08:51.115 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.374 09:40:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.374 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.647 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46f76981-180b-4964-bafd-3a0c558c1285 00:08:51.934 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:52.193 [2024-12-05 09:40:39.596038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.193 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:52.451 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67658 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67658 /var/tmp/bdevperf.sock 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 67658 ']' 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.452 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:52.452 [2024-12-05 09:40:39.909441] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:08:52.452 [2024-12-05 09:40:39.909622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67658 ] 00:08:52.452 [2024-12-05 09:40:40.077837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.711 [2024-12-05 09:40:40.201893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.646 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.646 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:53.646 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:53.646 Nvme0n1 00:08:53.646 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:53.906 [ 00:08:53.906 { 00:08:53.906 "aliases": [ 00:08:53.906 "46f76981-180b-4964-bafd-3a0c558c1285" 00:08:53.906 ], 00:08:53.906 "assigned_rate_limits": { 00:08:53.906 "r_mbytes_per_sec": 0, 00:08:53.906 "rw_ios_per_sec": 0, 00:08:53.906 "rw_mbytes_per_sec": 0, 00:08:53.906 "w_mbytes_per_sec": 0 00:08:53.906 }, 00:08:53.906 "block_size": 4096, 00:08:53.906 "claimed": false, 00:08:53.906 "driver_specific": { 00:08:53.906 "mp_policy": "active_passive", 00:08:53.906 "nvme": [ 00:08:53.906 { 00:08:53.906 "ctrlr_data": { 00:08:53.906 "ana_reporting": false, 00:08:53.906 "cntlid": 1, 00:08:53.906 "firmware_revision": "25.01", 00:08:53.906 "model_number": "SPDK bdev Controller", 00:08:53.906 "multi_ctrlr": true, 00:08:53.906 "oacs": { 00:08:53.906 "firmware": 0, 00:08:53.906 "format": 0, 00:08:53.906 "ns_manage": 0, 00:08:53.906 "security": 0 00:08:53.906 }, 00:08:53.906 "serial_number": "SPDK0", 00:08:53.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.906 "vendor_id": "0x8086" 00:08:53.906 }, 00:08:53.906 "ns_data": { 00:08:53.906 "can_share": true, 00:08:53.906 "id": 1 00:08:53.906 }, 00:08:53.906 "trid": { 00:08:53.906 "adrfam": "IPv4", 00:08:53.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.906 "traddr": "10.0.0.3", 00:08:53.906 "trsvcid": "4420", 00:08:53.906 "trtype": "TCP" 00:08:53.906 }, 00:08:53.906 "vs": { 00:08:53.906 "nvme_version": "1.3" 00:08:53.906 } 00:08:53.906 } 00:08:53.906 ] 00:08:53.906 }, 00:08:53.906 "memory_domains": [ 00:08:53.906 { 00:08:53.906 "dma_device_id": "system", 00:08:53.906 "dma_device_type": 1 00:08:53.906 } 00:08:53.906 ], 00:08:53.906 "name": "Nvme0n1", 00:08:53.906 "num_blocks": 38912, 00:08:53.906 "numa_id": -1, 00:08:53.906 "product_name": "NVMe disk", 00:08:53.906 "supported_io_types": { 00:08:53.906 "abort": true, 00:08:53.906 "compare": true, 00:08:53.906 "compare_and_write": true, 00:08:53.906 "copy": true, 00:08:53.906 "flush": true, 00:08:53.906 "get_zone_info": false, 00:08:53.906 "nvme_admin": true, 00:08:53.906 "nvme_io": true, 00:08:53.906 "nvme_io_md": false, 00:08:53.906 "nvme_iov_md": false, 00:08:53.906 "read": true, 00:08:53.906 "reset": true, 00:08:53.906 "seek_data": false, 00:08:53.906 "seek_hole": false, 00:08:53.906 "unmap": true, 00:08:53.906 
"write": true, 00:08:53.906 "write_zeroes": true, 00:08:53.906 "zcopy": false, 00:08:53.906 "zone_append": false, 00:08:53.906 "zone_management": false 00:08:53.906 }, 00:08:53.906 "uuid": "46f76981-180b-4964-bafd-3a0c558c1285", 00:08:53.906 "zoned": false 00:08:53.906 } 00:08:53.906 ] 00:08:53.906 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67706 00:08:53.906 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:53.906 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.165 Running I/O for 10 seconds... 00:08:55.101 Latency(us) 00:08:55.101 [2024-12-05T09:40:42.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.101 Nvme0n1 : 1.00 5976.00 23.34 0.00 0.00 0.00 0.00 0.00 00:08:55.101 [2024-12-05T09:40:42.730Z] =================================================================================================================== 00:08:55.101 [2024-12-05T09:40:42.730Z] Total : 5976.00 23.34 0.00 0.00 0.00 0.00 0.00 00:08:55.101 00:08:56.047 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:08:56.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.047 Nvme0n1 : 2.00 6275.50 24.51 0.00 0.00 0.00 0.00 0.00 00:08:56.047 [2024-12-05T09:40:43.676Z] =================================================================================================================== 00:08:56.047 [2024-12-05T09:40:43.676Z] Total : 6275.50 24.51 0.00 0.00 0.00 0.00 0.00 00:08:56.047 00:08:56.306 true 00:08:56.306 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.306 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:08:56.566 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.566 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.566 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 67706 00:08:57.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.133 Nvme0n1 : 3.00 6288.33 24.56 0.00 0.00 0.00 0.00 0.00 00:08:57.133 [2024-12-05T09:40:44.763Z] =================================================================================================================== 00:08:57.134 [2024-12-05T09:40:44.763Z] Total : 6288.33 24.56 0.00 0.00 0.00 0.00 0.00 00:08:57.134 00:08:58.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.071 Nvme0n1 : 4.00 6277.75 24.52 0.00 0.00 0.00 0.00 0.00 00:08:58.071 [2024-12-05T09:40:45.700Z] =================================================================================================================== 00:08:58.071 [2024-12-05T09:40:45.700Z] Total : 6277.75 24.52 0.00 0.00 0.00 
0.00 0.00 00:08:58.071 00:08:59.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.003 Nvme0n1 : 5.00 6264.00 24.47 0.00 0.00 0.00 0.00 0.00 00:08:59.003 [2024-12-05T09:40:46.632Z] =================================================================================================================== 00:08:59.003 [2024-12-05T09:40:46.632Z] Total : 6264.00 24.47 0.00 0.00 0.00 0.00 0.00 00:08:59.003 00:09:00.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.378 Nvme0n1 : 6.00 6256.50 24.44 0.00 0.00 0.00 0.00 0.00 00:09:00.378 [2024-12-05T09:40:48.007Z] =================================================================================================================== 00:09:00.378 [2024-12-05T09:40:48.007Z] Total : 6256.50 24.44 0.00 0.00 0.00 0.00 0.00 00:09:00.378 00:09:01.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.314 Nvme0n1 : 7.00 6233.86 24.35 0.00 0.00 0.00 0.00 0.00 00:09:01.314 [2024-12-05T09:40:48.943Z] =================================================================================================================== 00:09:01.314 [2024-12-05T09:40:48.943Z] Total : 6233.86 24.35 0.00 0.00 0.00 0.00 0.00 00:09:01.314 00:09:02.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.251 Nvme0n1 : 8.00 6205.50 24.24 0.00 0.00 0.00 0.00 0.00 00:09:02.251 [2024-12-05T09:40:49.880Z] =================================================================================================================== 00:09:02.251 [2024-12-05T09:40:49.880Z] Total : 6205.50 24.24 0.00 0.00 0.00 0.00 0.00 00:09:02.251 00:09:03.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.186 Nvme0n1 : 9.00 6176.33 24.13 0.00 0.00 0.00 0.00 0.00 00:09:03.186 [2024-12-05T09:40:50.815Z] =================================================================================================================== 00:09:03.186 [2024-12-05T09:40:50.815Z] Total : 6176.33 24.13 0.00 0.00 0.00 0.00 0.00 00:09:03.186 00:09:04.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.121 Nvme0n1 : 10.00 6160.60 24.06 0.00 0.00 0.00 0.00 0.00 00:09:04.121 [2024-12-05T09:40:51.750Z] =================================================================================================================== 00:09:04.121 [2024-12-05T09:40:51.750Z] Total : 6160.60 24.06 0.00 0.00 0.00 0.00 0.00 00:09:04.121 00:09:04.121 00:09:04.121 Latency(us) 00:09:04.121 [2024-12-05T09:40:51.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.121 Nvme0n1 : 10.01 6167.32 24.09 0.00 0.00 20748.50 6374.87 47900.86 00:09:04.121 [2024-12-05T09:40:51.750Z] =================================================================================================================== 00:09:04.121 [2024-12-05T09:40:51.750Z] Total : 6167.32 24.09 0.00 0.00 20748.50 6374.87 47900.86 00:09:04.121 { 00:09:04.121 "results": [ 00:09:04.121 { 00:09:04.121 "job": "Nvme0n1", 00:09:04.121 "core_mask": "0x2", 00:09:04.121 "workload": "randwrite", 00:09:04.121 "status": "finished", 00:09:04.121 "queue_depth": 128, 00:09:04.121 "io_size": 4096, 00:09:04.121 "runtime": 10.009854, 00:09:04.121 "iops": 6167.322720191523, 00:09:04.121 "mibps": 24.091104375748138, 00:09:04.121 "io_failed": 0, 00:09:04.121 "io_timeout": 0, 00:09:04.121 "avg_latency_us": 
20748.495811649394, 00:09:04.122 "min_latency_us": 6374.865454545455, 00:09:04.122 "max_latency_us": 47900.858181818185 00:09:04.122 } 00:09:04.122 ], 00:09:04.122 "core_count": 1 00:09:04.122 } 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67658 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 67658 ']' 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 67658 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67658 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:04.122 killing process with pid 67658 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67658' 00:09:04.122 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.122 00:09:04.122 Latency(us) 00:09:04.122 [2024-12-05T09:40:51.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.122 [2024-12-05T09:40:51.751Z] =================================================================================================================== 00:09:04.122 [2024-12-05T09:40:51.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 67658 00:09:04.122 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 67658 00:09:05.058 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.317 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.576 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:05.576 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.834 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.834 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:05.834 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.093 [2024-12-05 09:40:53.487633] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:06.093 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:06.352 2024/12/05 09:40:53 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:05734eb7-d22b-434f-8de5-d7a59717bc68], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:06.352 request: 00:09:06.352 { 00:09:06.352 "method": "bdev_lvol_get_lvstores", 00:09:06.352 "params": { 00:09:06.352 "uuid": "05734eb7-d22b-434f-8de5-d7a59717bc68" 00:09:06.352 } 00:09:06.352 } 00:09:06.352 Got JSON-RPC error response 00:09:06.352 GoRPCClient: error on JSON-RPC call 00:09:06.353 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:06.353 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:06.353 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:06.353 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:06.353 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.611 aio_bdev 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 46f76981-180b-4964-bafd-3a0c558c1285 00:09:06.611 09:40:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=46f76981-180b-4964-bafd-3a0c558c1285 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.611 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.870 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 46f76981-180b-4964-bafd-3a0c558c1285 -t 2000 00:09:07.137 [ 00:09:07.137 { 00:09:07.137 "aliases": [ 00:09:07.137 "lvs/lvol" 00:09:07.137 ], 00:09:07.137 "assigned_rate_limits": { 00:09:07.137 "r_mbytes_per_sec": 0, 00:09:07.137 "rw_ios_per_sec": 0, 00:09:07.137 "rw_mbytes_per_sec": 0, 00:09:07.137 "w_mbytes_per_sec": 0 00:09:07.137 }, 00:09:07.137 "block_size": 4096, 00:09:07.137 "claimed": false, 00:09:07.137 "driver_specific": { 00:09:07.137 "lvol": { 00:09:07.137 "base_bdev": "aio_bdev", 00:09:07.137 "clone": false, 00:09:07.137 "esnap_clone": false, 00:09:07.137 "lvol_store_uuid": "05734eb7-d22b-434f-8de5-d7a59717bc68", 00:09:07.137 "num_allocated_clusters": 38, 00:09:07.137 "snapshot": false, 00:09:07.137 "thin_provision": false 00:09:07.137 } 00:09:07.137 }, 00:09:07.137 "name": "46f76981-180b-4964-bafd-3a0c558c1285", 00:09:07.137 "num_blocks": 38912, 00:09:07.138 "product_name": "Logical Volume", 00:09:07.138 "supported_io_types": { 00:09:07.138 "abort": false, 00:09:07.138 "compare": false, 00:09:07.138 "compare_and_write": false, 00:09:07.138 "copy": false, 00:09:07.138 "flush": false, 00:09:07.138 "get_zone_info": false, 00:09:07.138 "nvme_admin": false, 00:09:07.138 "nvme_io": false, 00:09:07.138 "nvme_io_md": false, 00:09:07.138 "nvme_iov_md": false, 00:09:07.138 "read": true, 00:09:07.138 "reset": true, 00:09:07.138 "seek_data": true, 00:09:07.138 "seek_hole": true, 00:09:07.138 "unmap": true, 00:09:07.138 "write": true, 00:09:07.138 "write_zeroes": true, 00:09:07.138 "zcopy": false, 00:09:07.138 "zone_append": false, 00:09:07.138 "zone_management": false 00:09:07.138 }, 00:09:07.138 "uuid": "46f76981-180b-4964-bafd-3a0c558c1285", 00:09:07.138 "zoned": false 00:09:07.138 } 00:09:07.138 ] 00:09:07.138 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:07.138 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.138 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:07.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:07.401 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.662 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.662 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 46f76981-180b-4964-bafd-3a0c558c1285 00:09:07.922 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05734eb7-d22b-434f-8de5-d7a59717bc68 00:09:08.180 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.438 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.696 ************************************ 00:09:08.696 END TEST lvs_grow_clean 00:09:08.696 ************************************ 00:09:08.696 00:09:08.696 real 0m19.129s 00:09:08.696 user 0m18.617s 00:09:08.696 sys 0m2.167s 00:09:08.696 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.696 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.954 ************************************ 00:09:08.954 START TEST lvs_grow_dirty 00:09:08.954 ************************************ 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.954 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.954 
09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.211 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.211 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:09.469 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:09.469 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:09.469 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:09.726 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:09.726 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:09.726 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b lvol 150 00:09:09.983 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:09.983 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.983 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.261 [2024-12-05 09:40:57.731150] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.261 [2024-12-05 09:40:57.731267] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.261 true 00:09:10.261 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:10.261 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.518 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.518 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.777 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:11.036 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:11.294 [2024-12-05 09:40:58.715783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.294 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68117 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68117 /var/tmp/bdevperf.sock 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68117 ']' 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.552 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:11.552 [2024-12-05 09:40:59.092920] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
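Everything from here to the 10-second mark is the standard bdevperf-over-RPC pattern: the app is launched idle (-z), a controller is attached through its private socket, and perform_tests starts the workload; -S 1 is what produces the per-second tables below. A sketch using only commands already shown in this run:

    # Launch bdevperf idle; it waits on its own RPC socket.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the exported namespace (controller Nvme0 -> bdev Nvme0n1),
    # then kick off the configured randwrite workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

While this workload runs, the test grows the live lvstore (bdev_lvol_grow_lvstore below) and expects total_data_clusters to move from 49 to 99: 400 MiB / 4 MiB = 100 clusters, minus the one used for metadata.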
00:09:11.552 [2024-12-05 09:40:59.093118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68117 ] 00:09:11.809 [2024-12-05 09:40:59.265205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.809 [2024-12-05 09:40:59.408446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.746 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.746 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:12.746 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.746 Nvme0n1 00:09:13.005 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:13.005 [ 00:09:13.005 { 00:09:13.005 "aliases": [ 00:09:13.005 "b397f356-0ba3-4f97-858c-7ee7461221d5" 00:09:13.005 ], 00:09:13.005 "assigned_rate_limits": { 00:09:13.005 "r_mbytes_per_sec": 0, 00:09:13.005 "rw_ios_per_sec": 0, 00:09:13.005 "rw_mbytes_per_sec": 0, 00:09:13.005 "w_mbytes_per_sec": 0 00:09:13.005 }, 00:09:13.005 "block_size": 4096, 00:09:13.005 "claimed": false, 00:09:13.005 "driver_specific": { 00:09:13.005 "mp_policy": "active_passive", 00:09:13.005 "nvme": [ 00:09:13.005 { 00:09:13.005 "ctrlr_data": { 00:09:13.005 "ana_reporting": false, 00:09:13.005 "cntlid": 1, 00:09:13.005 "firmware_revision": "25.01", 00:09:13.005 "model_number": "SPDK bdev Controller", 00:09:13.005 "multi_ctrlr": true, 00:09:13.005 "oacs": { 00:09:13.005 "firmware": 0, 00:09:13.005 "format": 0, 00:09:13.005 "ns_manage": 0, 00:09:13.005 "security": 0 00:09:13.005 }, 00:09:13.005 "serial_number": "SPDK0", 00:09:13.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.005 "vendor_id": "0x8086" 00:09:13.005 }, 00:09:13.005 "ns_data": { 00:09:13.005 "can_share": true, 00:09:13.005 "id": 1 00:09:13.005 }, 00:09:13.005 "trid": { 00:09:13.005 "adrfam": "IPv4", 00:09:13.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.005 "traddr": "10.0.0.3", 00:09:13.005 "trsvcid": "4420", 00:09:13.005 "trtype": "TCP" 00:09:13.005 }, 00:09:13.005 "vs": { 00:09:13.005 "nvme_version": "1.3" 00:09:13.005 } 00:09:13.005 } 00:09:13.005 ] 00:09:13.005 }, 00:09:13.005 "memory_domains": [ 00:09:13.005 { 00:09:13.005 "dma_device_id": "system", 00:09:13.005 "dma_device_type": 1 00:09:13.005 } 00:09:13.005 ], 00:09:13.005 "name": "Nvme0n1", 00:09:13.005 "num_blocks": 38912, 00:09:13.005 "numa_id": -1, 00:09:13.005 "product_name": "NVMe disk", 00:09:13.005 "supported_io_types": { 00:09:13.005 "abort": true, 00:09:13.005 "compare": true, 00:09:13.005 "compare_and_write": true, 00:09:13.005 "copy": true, 00:09:13.005 "flush": true, 00:09:13.005 "get_zone_info": false, 00:09:13.005 "nvme_admin": true, 00:09:13.005 "nvme_io": true, 00:09:13.005 "nvme_io_md": false, 00:09:13.005 "nvme_iov_md": false, 00:09:13.005 "read": true, 00:09:13.005 "reset": true, 00:09:13.005 "seek_data": false, 00:09:13.005 "seek_hole": false, 00:09:13.005 "unmap": true, 00:09:13.005 
"write": true, 00:09:13.005 "write_zeroes": true, 00:09:13.005 "zcopy": false, 00:09:13.005 "zone_append": false, 00:09:13.005 "zone_management": false 00:09:13.005 }, 00:09:13.005 "uuid": "b397f356-0ba3-4f97-858c-7ee7461221d5", 00:09:13.005 "zoned": false 00:09:13.005 } 00:09:13.005 ] 00:09:13.005 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.005 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68166 00:09:13.005 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.264 Running I/O for 10 seconds... 00:09:14.201 Latency(us) 00:09:14.201 [2024-12-05T09:41:01.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.201 Nvme0n1 : 1.00 6199.00 24.21 0.00 0.00 0.00 0.00 0.00 00:09:14.201 [2024-12-05T09:41:01.830Z] =================================================================================================================== 00:09:14.201 [2024-12-05T09:41:01.830Z] Total : 6199.00 24.21 0.00 0.00 0.00 0.00 0.00 00:09:14.201 00:09:15.139 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:15.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.139 Nvme0n1 : 2.00 6235.00 24.36 0.00 0.00 0.00 0.00 0.00 00:09:15.139 [2024-12-05T09:41:02.768Z] =================================================================================================================== 00:09:15.139 [2024-12-05T09:41:02.768Z] Total : 6235.00 24.36 0.00 0.00 0.00 0.00 0.00 00:09:15.139 00:09:15.398 true 00:09:15.398 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.398 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:15.966 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.966 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.966 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68166 00:09:16.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.224 Nvme0n1 : 3.00 6276.00 24.52 0.00 0.00 0.00 0.00 0.00 00:09:16.224 [2024-12-05T09:41:03.853Z] =================================================================================================================== 00:09:16.224 [2024-12-05T09:41:03.853Z] Total : 6276.00 24.52 0.00 0.00 0.00 0.00 0.00 00:09:16.224 00:09:17.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.161 Nvme0n1 : 4.00 6260.25 24.45 0.00 0.00 0.00 0.00 0.00 00:09:17.161 [2024-12-05T09:41:04.790Z] =================================================================================================================== 00:09:17.161 [2024-12-05T09:41:04.790Z] Total : 6260.25 24.45 0.00 0.00 0.00 
0.00 0.00 00:09:17.161 00:09:18.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.095 Nvme0n1 : 5.00 6165.40 24.08 0.00 0.00 0.00 0.00 0.00 00:09:18.095 [2024-12-05T09:41:05.724Z] =================================================================================================================== 00:09:18.095 [2024-12-05T09:41:05.724Z] Total : 6165.40 24.08 0.00 0.00 0.00 0.00 0.00 00:09:18.095 00:09:19.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.467 Nvme0n1 : 6.00 6155.33 24.04 0.00 0.00 0.00 0.00 0.00 00:09:19.467 [2024-12-05T09:41:07.096Z] =================================================================================================================== 00:09:19.467 [2024-12-05T09:41:07.096Z] Total : 6155.33 24.04 0.00 0.00 0.00 0.00 0.00 00:09:19.467 00:09:20.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.404 Nvme0n1 : 7.00 6165.57 24.08 0.00 0.00 0.00 0.00 0.00 00:09:20.404 [2024-12-05T09:41:08.033Z] =================================================================================================================== 00:09:20.404 [2024-12-05T09:41:08.033Z] Total : 6165.57 24.08 0.00 0.00 0.00 0.00 0.00 00:09:20.404 00:09:21.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.339 Nvme0n1 : 8.00 6172.62 24.11 0.00 0.00 0.00 0.00 0.00 00:09:21.339 [2024-12-05T09:41:08.968Z] =================================================================================================================== 00:09:21.339 [2024-12-05T09:41:08.968Z] Total : 6172.62 24.11 0.00 0.00 0.00 0.00 0.00 00:09:21.339 00:09:22.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.275 Nvme0n1 : 9.00 6163.89 24.08 0.00 0.00 0.00 0.00 0.00 00:09:22.275 [2024-12-05T09:41:09.904Z] =================================================================================================================== 00:09:22.275 [2024-12-05T09:41:09.904Z] Total : 6163.89 24.08 0.00 0.00 0.00 0.00 0.00 00:09:22.275 00:09:23.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.212 Nvme0n1 : 10.00 6165.40 24.08 0.00 0.00 0.00 0.00 0.00 00:09:23.212 [2024-12-05T09:41:10.841Z] =================================================================================================================== 00:09:23.212 [2024-12-05T09:41:10.841Z] Total : 6165.40 24.08 0.00 0.00 0.00 0.00 0.00 00:09:23.212 00:09:23.212 00:09:23.212 Latency(us) 00:09:23.212 [2024-12-05T09:41:10.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.212 Nvme0n1 : 10.02 6165.36 24.08 0.00 0.00 20746.47 6047.19 106764.10 00:09:23.212 [2024-12-05T09:41:10.841Z] =================================================================================================================== 00:09:23.212 [2024-12-05T09:41:10.841Z] Total : 6165.36 24.08 0.00 0.00 20746.47 6047.19 106764.10 00:09:23.212 { 00:09:23.212 "results": [ 00:09:23.212 { 00:09:23.212 "job": "Nvme0n1", 00:09:23.212 "core_mask": "0x2", 00:09:23.212 "workload": "randwrite", 00:09:23.212 "status": "finished", 00:09:23.212 "queue_depth": 128, 00:09:23.212 "io_size": 4096, 00:09:23.212 "runtime": 10.020824, 00:09:23.212 "iops": 6165.361251729399, 00:09:23.212 "mibps": 24.083442389567963, 00:09:23.212 "io_failed": 0, 00:09:23.212 "io_timeout": 0, 00:09:23.212 "avg_latency_us": 
20746.46655707311, 00:09:23.212 "min_latency_us": 6047.185454545454, 00:09:23.212 "max_latency_us": 106764.10181818182 00:09:23.212 } 00:09:23.212 ], 00:09:23.212 "core_count": 1 00:09:23.212 } 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68117 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 68117 ']' 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 68117 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68117 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:23.212 killing process with pid 68117 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68117' 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 68117 00:09:23.212 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.212 00:09:23.212 Latency(us) 00:09:23.212 [2024-12-05T09:41:10.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.212 [2024-12-05T09:41:10.841Z] =================================================================================================================== 00:09:23.212 [2024-12-05T09:41:10.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.212 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 68117 00:09:24.148 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:24.407 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.666 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:24.666 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.924 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:24.924 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:24.924 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67491 00:09:24.924 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67491 00:09:24.924 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67491 Killed "${NVMF_APP[@]}" "$@" 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=68336 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 68336 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68336 ']' 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.183 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.183 [2024-12-05 09:41:12.699863] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:25.183 [2024-12-05 09:41:12.700091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.442 [2024-12-05 09:41:12.896502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.442 [2024-12-05 09:41:13.009529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.442 [2024-12-05 09:41:13.009582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.442 [2024-12-05 09:41:13.009601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.442 [2024-12-05 09:41:13.009626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.442 [2024-12-05 09:41:13.009639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
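The kill -9 above is the point of lvs_grow_dirty: the lvstore was grown and written to but never unloaded, so its blobstore was never marked clean on disk. When the restarted target re-creates the AIO bdev in the next entries, blobstore load detects this and replays the on-disk metadata ("Performing recovery on blobstore", "Recover: blob 0x0" and "0x1"), after which the test asserts that the grown geometry survived. A sketch of that check, reusing this run's lvstore UUID:

    # Re-create the AIO bdev on the fresh target; this triggers recovery.
    scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

    # The grown geometry must survive the dirty restart: 99 data clusters
    # total, 38 still allocated to the 150 MiB lvol, so free_clusters == 61.
    scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b \
        | jq -r '.[0] | "\(.free_clusters) free of \(.total_data_clusters)"'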
00:09:25.442 [2024-12-05 09:41:13.010813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.010 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.010 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:26.010 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.010 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.010 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.269 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.269 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.528 [2024-12-05 09:41:13.905251] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:26.528 [2024-12-05 09:41:13.905537] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:26.528 [2024-12-05 09:41:13.905767] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.528 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.787 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b397f356-0ba3-4f97-858c-7ee7461221d5 -t 2000 00:09:27.044 [ 00:09:27.044 { 00:09:27.044 "aliases": [ 00:09:27.044 "lvs/lvol" 00:09:27.044 ], 00:09:27.044 "assigned_rate_limits": { 00:09:27.044 "r_mbytes_per_sec": 0, 00:09:27.044 "rw_ios_per_sec": 0, 00:09:27.044 "rw_mbytes_per_sec": 0, 00:09:27.044 "w_mbytes_per_sec": 0 00:09:27.044 }, 00:09:27.044 "block_size": 4096, 00:09:27.044 "claimed": false, 00:09:27.044 "driver_specific": { 00:09:27.044 "lvol": { 00:09:27.044 "base_bdev": "aio_bdev", 00:09:27.044 "clone": false, 00:09:27.044 "esnap_clone": false, 00:09:27.044 "lvol_store_uuid": "fb5e6f2c-1b7f-4460-996b-b76040a94c6b", 00:09:27.044 "num_allocated_clusters": 38, 00:09:27.044 "snapshot": false, 00:09:27.044 
"thin_provision": false 00:09:27.044 } 00:09:27.045 }, 00:09:27.045 "name": "b397f356-0ba3-4f97-858c-7ee7461221d5", 00:09:27.045 "num_blocks": 38912, 00:09:27.045 "product_name": "Logical Volume", 00:09:27.045 "supported_io_types": { 00:09:27.045 "abort": false, 00:09:27.045 "compare": false, 00:09:27.045 "compare_and_write": false, 00:09:27.045 "copy": false, 00:09:27.045 "flush": false, 00:09:27.045 "get_zone_info": false, 00:09:27.045 "nvme_admin": false, 00:09:27.045 "nvme_io": false, 00:09:27.045 "nvme_io_md": false, 00:09:27.045 "nvme_iov_md": false, 00:09:27.045 "read": true, 00:09:27.045 "reset": true, 00:09:27.045 "seek_data": true, 00:09:27.045 "seek_hole": true, 00:09:27.045 "unmap": true, 00:09:27.045 "write": true, 00:09:27.045 "write_zeroes": true, 00:09:27.045 "zcopy": false, 00:09:27.045 "zone_append": false, 00:09:27.045 "zone_management": false 00:09:27.045 }, 00:09:27.045 "uuid": "b397f356-0ba3-4f97-858c-7ee7461221d5", 00:09:27.045 "zoned": false 00:09:27.045 } 00:09:27.045 ] 00:09:27.045 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:27.045 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:27.045 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:27.302 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:27.302 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:27.302 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:27.560 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:27.560 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.818 [2024-12-05 09:41:15.242684] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.818 09:41:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:27.818 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:28.075 2024/12/05 09:41:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fb5e6f2c-1b7f-4460-996b-b76040a94c6b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:28.075 request: 00:09:28.075 { 00:09:28.075 "method": "bdev_lvol_get_lvstores", 00:09:28.075 "params": { 00:09:28.075 "uuid": "fb5e6f2c-1b7f-4460-996b-b76040a94c6b" 00:09:28.075 } 00:09:28.075 } 00:09:28.075 Got JSON-RPC error response 00:09:28.075 GoRPCClient: error on JSON-RPC call 00:09:28.075 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:28.075 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.075 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.075 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.075 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.332 aio_bdev 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.332 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.590 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b397f356-0ba3-4f97-858c-7ee7461221d5 -t 2000 00:09:28.847 [ 
00:09:28.847 { 00:09:28.847 "aliases": [ 00:09:28.847 "lvs/lvol" 00:09:28.847 ], 00:09:28.847 "assigned_rate_limits": { 00:09:28.847 "r_mbytes_per_sec": 0, 00:09:28.847 "rw_ios_per_sec": 0, 00:09:28.847 "rw_mbytes_per_sec": 0, 00:09:28.847 "w_mbytes_per_sec": 0 00:09:28.847 }, 00:09:28.847 "block_size": 4096, 00:09:28.847 "claimed": false, 00:09:28.847 "driver_specific": { 00:09:28.847 "lvol": { 00:09:28.847 "base_bdev": "aio_bdev", 00:09:28.847 "clone": false, 00:09:28.847 "esnap_clone": false, 00:09:28.847 "lvol_store_uuid": "fb5e6f2c-1b7f-4460-996b-b76040a94c6b", 00:09:28.848 "num_allocated_clusters": 38, 00:09:28.848 "snapshot": false, 00:09:28.848 "thin_provision": false 00:09:28.848 } 00:09:28.848 }, 00:09:28.848 "name": "b397f356-0ba3-4f97-858c-7ee7461221d5", 00:09:28.848 "num_blocks": 38912, 00:09:28.848 "product_name": "Logical Volume", 00:09:28.848 "supported_io_types": { 00:09:28.848 "abort": false, 00:09:28.848 "compare": false, 00:09:28.848 "compare_and_write": false, 00:09:28.848 "copy": false, 00:09:28.848 "flush": false, 00:09:28.848 "get_zone_info": false, 00:09:28.848 "nvme_admin": false, 00:09:28.848 "nvme_io": false, 00:09:28.848 "nvme_io_md": false, 00:09:28.848 "nvme_iov_md": false, 00:09:28.848 "read": true, 00:09:28.848 "reset": true, 00:09:28.848 "seek_data": true, 00:09:28.848 "seek_hole": true, 00:09:28.848 "unmap": true, 00:09:28.848 "write": true, 00:09:28.848 "write_zeroes": true, 00:09:28.848 "zcopy": false, 00:09:28.848 "zone_append": false, 00:09:28.848 "zone_management": false 00:09:28.848 }, 00:09:28.848 "uuid": "b397f356-0ba3-4f97-858c-7ee7461221d5", 00:09:28.848 "zoned": false 00:09:28.848 } 00:09:28.848 ] 00:09:28.848 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:28.848 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:28.848 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.105 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.105 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.105 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:29.363 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:29.363 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b397f356-0ba3-4f97-858c-7ee7461221d5 00:09:29.621 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb5e6f2c-1b7f-4460-996b-b76040a94c6b 00:09:29.880 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.140 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:30.399 ************************************ 00:09:30.399 END TEST lvs_grow_dirty 00:09:30.399 ************************************ 00:09:30.399 00:09:30.399 real 0m21.586s 00:09:30.399 user 0m44.037s 00:09:30.399 sys 0m9.602s 00:09:30.399 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.399 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:30.399 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:30.399 nvmf_trace.0 00:09:30.658 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:30.658 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:30.658 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.658 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.916 rmmod nvme_tcp 00:09:30.916 rmmod nvme_fabrics 00:09:30.916 rmmod nvme_keyring 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 68336 ']' 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 68336 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 68336 ']' 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 68336 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:30.916 09:41:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.916 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68336 00:09:30.916 killing process with pid 68336 00:09:30.917 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.917 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.917 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68336' 00:09:30.917 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 68336 00:09:30.917 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 68336 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.853 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:32.113 00:09:32.113 real 0m44.282s 00:09:32.113 user 1m9.886s 00:09:32.113 sys 0m12.807s 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.113 ************************************ 00:09:32.113 END TEST nvmf_lvs_grow 00:09:32.113 ************************************ 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.113 ************************************ 00:09:32.113 START TEST nvmf_bdev_io_wait 00:09:32.113 ************************************ 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:32.113 * Looking for test storage... 
00:09:32.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.113 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.113 --rc genhtml_branch_coverage=1 00:09:32.113 --rc genhtml_function_coverage=1 00:09:32.114 --rc genhtml_legend=1 00:09:32.114 --rc geninfo_all_blocks=1 00:09:32.114 --rc geninfo_unexecuted_blocks=1 00:09:32.114 00:09:32.114 ' 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.114 --rc genhtml_branch_coverage=1 00:09:32.114 --rc genhtml_function_coverage=1 00:09:32.114 --rc genhtml_legend=1 00:09:32.114 --rc geninfo_all_blocks=1 00:09:32.114 --rc geninfo_unexecuted_blocks=1 00:09:32.114 00:09:32.114 ' 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.114 --rc genhtml_branch_coverage=1 00:09:32.114 --rc genhtml_function_coverage=1 00:09:32.114 --rc genhtml_legend=1 00:09:32.114 --rc geninfo_all_blocks=1 00:09:32.114 --rc geninfo_unexecuted_blocks=1 00:09:32.114 00:09:32.114 ' 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.114 --rc genhtml_branch_coverage=1 00:09:32.114 --rc genhtml_function_coverage=1 00:09:32.114 --rc genhtml_legend=1 00:09:32.114 --rc geninfo_all_blocks=1 00:09:32.114 --rc geninfo_unexecuted_blocks=1 00:09:32.114 00:09:32.114 ' 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.114 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
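The "[: : integer expression expected" complaint captured above comes from nvmf/common.sh line 33: an unset flag expands to the empty string, and test(1)'s -eq operator requires integer operands. A minimal bash reproduction (the ${flag:-0} fallback is a hypothetical fix, not what common.sh does):

    flag=""
    [ "$flag" -eq 1 ] 2>/dev/null || echo "non-integer operand rejected"
    [ "${flag:-0}" -eq 1 ] || echo "empty value treated as 0"

The nvmftestinit traced next tears down any stale interfaces and then builds a two-namespace veth/bridge topology for the TCP transport. A condensed sketch of the equivalent commands, using the interface names and addresses that appear in the traces below (the second veth pair, the link-up steps, and the iptables ACCEPT rules are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the two sides
    ip link set nvmf_tgt_br master nvmf_br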
00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:32.374 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.375 
09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:32.375 Cannot find device "nvmf_init_br" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:32.375 Cannot find device "nvmf_init_br2" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:32.375 Cannot find device "nvmf_tgt_br" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.375 Cannot find device "nvmf_tgt_br2" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:32.375 Cannot find device "nvmf_init_br" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:32.375 Cannot find device "nvmf_init_br2" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:32.375 Cannot find device "nvmf_tgt_br" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:32.375 Cannot find device "nvmf_tgt_br2" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.375 Cannot find device "nvmf_br" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.375 Cannot find device "nvmf_init_if" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.375 Cannot find device "nvmf_init_if2" 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:32.375 
09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.375 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:32.635 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:32.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:32.636 00:09:32.636 --- 10.0.0.3 ping statistics --- 00:09:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.636 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:32.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:32.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:32.636 00:09:32.636 --- 10.0.0.4 ping statistics --- 00:09:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.636 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:09:32.636 00:09:32.636 --- 10.0.0.1 ping statistics --- 00:09:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.636 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:32.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:32.636 00:09:32.636 --- 10.0.0.2 ping statistics --- 00:09:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.636 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=68823 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 68823 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 68823 ']' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.636 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.896 [2024-12-05 09:41:20.284030] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
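The pings above verify that topology end to end, after which nvmfappstart launches nvmf_tgt as the primary SPDK/DPDK process inside the target namespace (its startup banner begins above and continues below) and waitforlisten blocks until pid 68823 is serving RPCs. A rough sketch of the equivalent steps; the rpc_get_methods polling loop is an assumed simplification of waitforlisten, which also checks that the process is still alive:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done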
00:09:32.896 [2024-12-05 09:41:20.284203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.896 [2024-12-05 09:41:20.467512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.155 [2024-12-05 09:41:20.582516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.155 [2024-12-05 09:41:20.582567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.155 [2024-12-05 09:41:20.582583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.155 [2024-12-05 09:41:20.582593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.155 [2024-12-05 09:41:20.582605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.155 [2024-12-05 09:41:20.584643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.155 [2024-12-05 09:41:20.584726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.155 [2024-12-05 09:41:20.584851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.155 [2024-12-05 09:41:20.584876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.725 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.985 [2024-12-05 09:41:21.374809] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.985 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.985 Malloc0 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.986 [2024-12-05 09:41:21.512250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68877 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.986 { 00:09:33.986 "params": { 00:09:33.986 "name": "Nvme$subsystem", 00:09:33.986 "trtype": "$TEST_TRANSPORT", 00:09:33.986 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:09:33.986 "adrfam": "ipv4", 00:09:33.986 "trsvcid": "$NVMF_PORT", 00:09:33.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.986 "hdgst": ${hdgst:-false}, 00:09:33.986 "ddgst": ${ddgst:-false} 00:09:33.986 }, 00:09:33.986 "method": "bdev_nvme_attach_controller" 00:09:33.986 } 00:09:33.986 EOF 00:09:33.986 )") 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68879 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68882 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.986 { 00:09:33.986 "params": { 00:09:33.986 "name": "Nvme$subsystem", 00:09:33.986 "trtype": "$TEST_TRANSPORT", 00:09:33.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.986 "adrfam": "ipv4", 00:09:33.986 "trsvcid": "$NVMF_PORT", 00:09:33.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.986 "hdgst": ${hdgst:-false}, 00:09:33.986 "ddgst": ${ddgst:-false} 00:09:33.986 }, 00:09:33.986 "method": "bdev_nvme_attach_controller" 00:09:33.986 } 00:09:33.986 EOF 00:09:33.986 )") 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68885 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.986 { 00:09:33.986 "params": { 00:09:33.986 "name": "Nvme$subsystem", 00:09:33.986 "trtype": "$TEST_TRANSPORT", 00:09:33.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.986 "adrfam": "ipv4", 00:09:33.986 "trsvcid": "$NVMF_PORT", 00:09:33.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.986 "hdgst": ${hdgst:-false}, 00:09:33.986 "ddgst": ${ddgst:-false} 00:09:33.986 }, 00:09:33.986 "method": "bdev_nvme_attach_controller" 00:09:33.986 } 00:09:33.986 EOF 00:09:33.986 )") 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.986 "params": { 00:09:33.986 "name": "Nvme1", 00:09:33.986 "trtype": "tcp", 00:09:33.986 "traddr": "10.0.0.3", 00:09:33.986 "adrfam": "ipv4", 00:09:33.986 "trsvcid": "4420", 00:09:33.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.986 "hdgst": false, 00:09:33.986 "ddgst": false 00:09:33.986 }, 00:09:33.986 "method": "bdev_nvme_attach_controller" 00:09:33.986 }' 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:33.986 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.986 "params": { 00:09:33.986 "name": "Nvme1", 00:09:33.986 "trtype": "tcp", 00:09:33.986 "traddr": "10.0.0.3", 00:09:33.986 "adrfam": "ipv4", 00:09:33.986 "trsvcid": "4420", 00:09:33.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.987 "hdgst": false, 00:09:33.987 "ddgst": false 00:09:33.987 }, 00:09:33.987 "method": "bdev_nvme_attach_controller" 00:09:33.987 }' 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.987 { 00:09:33.987 "params": { 00:09:33.987 "name": "Nvme$subsystem", 00:09:33.987 "trtype": "$TEST_TRANSPORT", 00:09:33.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.987 "adrfam": "ipv4", 00:09:33.987 "trsvcid": "$NVMF_PORT", 00:09:33.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.987 "hdgst": ${hdgst:-false}, 00:09:33.987 "ddgst": ${ddgst:-false} 00:09:33.987 }, 00:09:33.987 "method": "bdev_nvme_attach_controller" 00:09:33.987 } 00:09:33.987 EOF 00:09:33.987 )") 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.987 "params": { 00:09:33.987 "name": "Nvme1", 00:09:33.987 "trtype": "tcp", 00:09:33.987 "traddr": "10.0.0.3", 00:09:33.987 "adrfam": "ipv4", 00:09:33.987 "trsvcid": "4420", 00:09:33.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.987 "hdgst": false, 00:09:33.987 "ddgst": false 00:09:33.987 }, 00:09:33.987 "method": "bdev_nvme_attach_controller" 00:09:33.987 }' 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
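The fourth and final config expansion follows below, after which all four bdevperf instances run concurrently against the same cnode1 listener: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with a distinct shared-memory id (-i 1 through 4) so the secondaries coexist with the target's -i 0. A condensed sketch of that fan-out, reusing the $config variable from the previous sketch:

    pids=()
    i=1
    for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
        read -r mask workload <<< "$spec"
        ./build/examples/bdevperf -m "$mask" -i "$i" --json <(jq . <<< "$config") \
            -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
        pids+=($!)
        i=$((i + 1))
    done
    wait "${pids[@]}"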
00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.987 "params": { 00:09:33.987 "name": "Nvme1", 00:09:33.987 "trtype": "tcp", 00:09:33.987 "traddr": "10.0.0.3", 00:09:33.987 "adrfam": "ipv4", 00:09:33.987 "trsvcid": "4420", 00:09:33.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.987 "hdgst": false, 00:09:33.987 "ddgst": false 00:09:33.987 }, 00:09:33.987 "method": "bdev_nvme_attach_controller" 00:09:33.987 }' 00:09:33.987 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68877 00:09:34.246 [2024-12-05 09:41:21.642396] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:34.246 [2024-12-05 09:41:21.642551] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.246 [2024-12-05 09:41:21.644348] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:34.246 [2024-12-05 09:41:21.644509] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.246 [2024-12-05 09:41:21.651201] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:34.246 [2024-12-05 09:41:21.651368] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.246 [2024-12-05 09:41:21.656810] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:34.246 [2024-12-05 09:41:21.657012] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.505 [2024-12-05 09:41:21.883643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.505 [2024-12-05 09:41:21.973191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.505 [2024-12-05 09:41:21.995264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:34.505 [2024-12-05 09:41:22.036090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.505 [2024-12-05 09:41:22.114130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.765 [2024-12-05 09:41:22.137889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.765 [2024-12-05 09:41:22.152408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:34.765 [2024-12-05 09:41:22.282479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:34.765 Running I/O for 1 seconds... 00:09:35.024 Running I/O for 1 seconds... 00:09:35.024 Running I/O for 1 seconds... 00:09:35.283 Running I/O for 1 seconds... 
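Each "Running I/O for 1 seconds..." line corresponds to -t 1, and the latency tables that follow report the per-workload results. The MiB/s column is simply IOPS times the 4096-byte I/O size; flush completes far faster than the data-moving workloads, as expected for an operation that carries no payload. A quick check against two of the figures below:

    # MiB/s = IOPS * io_size / 2^20, with io_size = 4096
    awk 'BEGIN { printf "%.2f\n", 5674.00   * 4096 / 1048576 }'   # 22.16  (unmap headline)
    awk 'BEGIN { printf "%.2f\n", 181949.39 * 4096 / 1048576 }'   # 710.74 (flush Total)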
00:09:35.850 5674.00 IOPS, 22.16 MiB/s 00:09:35.850 Latency(us) 00:09:35.850 [2024-12-05T09:41:23.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.850 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:35.850 Nvme1n1 : 1.02 5717.89 22.34 0.00 0.00 22193.13 4379.00 39083.29 00:09:35.850 [2024-12-05T09:41:23.479Z] =================================================================================================================== 00:09:35.850 [2024-12-05T09:41:23.479Z] Total : 5717.89 22.34 0.00 0.00 22193.13 4379.00 39083.29 00:09:36.109 5485.00 IOPS, 21.43 MiB/s 00:09:36.109 Latency(us) 00:09:36.109 [2024-12-05T09:41:23.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.109 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.109 Nvme1n1 : 1.01 5572.85 21.77 0.00 0.00 22863.79 8281.37 39798.23 00:09:36.109 [2024-12-05T09:41:23.738Z] =================================================================================================================== 00:09:36.109 [2024-12-05T09:41:23.738Z] Total : 5572.85 21.77 0.00 0.00 22863.79 8281.37 39798.23 00:09:36.109 182280.00 IOPS, 712.03 MiB/s 00:09:36.109 Latency(us) 00:09:36.109 [2024-12-05T09:41:23.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.109 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.109 Nvme1n1 : 1.00 181949.39 710.74 0.00 0.00 699.83 355.61 3187.43 00:09:36.109 [2024-12-05T09:41:23.738Z] =================================================================================================================== 00:09:36.109 [2024-12-05T09:41:23.738Z] Total : 181949.39 710.74 0.00 0.00 699.83 355.61 3187.43 00:09:36.109 6500.00 IOPS, 25.39 MiB/s 00:09:36.109 Latency(us) 00:09:36.109 [2024-12-05T09:41:23.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.109 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.109 Nvme1n1 : 1.01 6540.84 25.55 0.00 0.00 19427.65 11617.75 32410.53 00:09:36.109 [2024-12-05T09:41:23.738Z] =================================================================================================================== 00:09:36.109 [2024-12-05T09:41:23.738Z] Total : 6540.84 25.55 0.00 0.00 19427.65 11617.75 32410.53 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68879 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68882 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68885 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.676 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.936 rmmod nvme_tcp 00:09:36.936 rmmod nvme_fabrics 00:09:36.936 rmmod nvme_keyring 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 68823 ']' 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 68823 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 68823 ']' 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 68823 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68823 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.936 killing process with pid 68823 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68823' 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 68823 00:09:36.936 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 68823 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:37.873 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:38.140 00:09:38.140 real 0m6.030s 00:09:38.140 user 0m25.853s 00:09:38.140 sys 0m2.623s 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.140 ************************************ 00:09:38.140 END TEST nvmf_bdev_io_wait 00:09:38.140 ************************************ 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.140 09:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.140 ************************************ 00:09:38.140 START TEST nvmf_queue_depth 00:09:38.141 ************************************ 00:09:38.141 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.141 * Looking for test storage... 
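run_test, visible above as it closes nvmf_bdev_io_wait (with the real/user/sys timing block) and opens nvmf_queue_depth, is the harness wrapper that prints the starred START/END banners around each suite. A minimal sketch of what it does; the real helper in autotest_common.sh also validates its arguments and toggles xtrace:

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                          # source of the "real 0m6.030s"-style lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }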
00:09:38.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.141 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.141 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.141 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:38.417 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.418 --rc genhtml_branch_coverage=1 00:09:38.418 --rc genhtml_function_coverage=1 00:09:38.418 --rc genhtml_legend=1 00:09:38.418 --rc geninfo_all_blocks=1 00:09:38.418 --rc geninfo_unexecuted_blocks=1 00:09:38.418 00:09:38.418 ' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.418 --rc genhtml_branch_coverage=1 00:09:38.418 --rc genhtml_function_coverage=1 00:09:38.418 --rc genhtml_legend=1 00:09:38.418 --rc geninfo_all_blocks=1 00:09:38.418 --rc geninfo_unexecuted_blocks=1 00:09:38.418 00:09:38.418 ' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.418 --rc genhtml_branch_coverage=1 00:09:38.418 --rc genhtml_function_coverage=1 00:09:38.418 --rc genhtml_legend=1 00:09:38.418 --rc geninfo_all_blocks=1 00:09:38.418 --rc geninfo_unexecuted_blocks=1 00:09:38.418 00:09:38.418 ' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.418 --rc genhtml_branch_coverage=1 00:09:38.418 --rc genhtml_function_coverage=1 00:09:38.418 --rc genhtml_legend=1 00:09:38.418 --rc geninfo_all_blocks=1 00:09:38.418 --rc geninfo_unexecuted_blocks=1 00:09:38.418 00:09:38.418 ' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:38.418 
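Worth noting from the common.sh setup above: the host identity is minted fresh per run with nvme gen-hostnqn, giving the nqn.2014-08.org.nvmexpress:uuid:... value that flows into NVME_HOSTNQN/NVME_HOSTID. (The "[: : integer expression expected" complaint just above is bash evaluating '[' '' -eq 1 ']' with an unset flag at common.sh line 33; it is harmless here.) When nvme-cli is unavailable, the same NQN shape can be produced from any UUID source; a sketch using the kernel's UUID generator:

    uuid=$(cat /proc/sys/kernel/random/uuid)   # e.g. bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
    NVME_HOSTID="$uuid"
    echo "$NVME_HOSTNQN"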
09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.418 09:41:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:38.418 Cannot find device "nvmf_init_br" 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:38.418 Cannot find device "nvmf_init_br2" 00:09:38.418 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:38.419 Cannot find device "nvmf_tgt_br" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.419 Cannot find device "nvmf_tgt_br2" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:38.419 Cannot find device "nvmf_init_br" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:38.419 Cannot find device "nvmf_init_br2" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:38.419 Cannot find device "nvmf_tgt_br" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:38.419 Cannot find device "nvmf_tgt_br2" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:38.419 Cannot find device "nvmf_br" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:38.419 Cannot find device "nvmf_init_if" 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:38.419 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:38.419 Cannot find device "nvmf_init_if2" 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.419 09:41:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.419 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.730 
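The ip commands in this stretch assemble the whole test network: veth pairs for the initiators (root namespace, 10.0.0.1/.2) and for the targets (moved into nvmf_tgt_ns_spdk, 10.0.0.3/.4), with every peer end enslaved to the nvmf_br bridge in the lines just above and below. Condensed to a single pair each way, the topology is (a sketch; names and addresses mirror the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # hide target end in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the two peer ends
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up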
09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:38.730 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:38.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:38.731 00:09:38.731 --- 10.0.0.3 ping statistics --- 00:09:38.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.731 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:38.731 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:38.731 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:09:38.731 00:09:38.731 --- 10.0.0.4 ping statistics --- 00:09:38.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.731 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:09:38.731 00:09:38.731 --- 10.0.0.1 ping statistics --- 00:09:38.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.731 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:38.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:38.731 00:09:38.731 --- 10.0.0.2 ping statistics --- 00:09:38.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.731 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.731 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=69194 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 69194 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69194 ']' 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.000 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.000 [2024-12-05 09:41:26.457353] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
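nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. The essence of that startup handshake (a sketch; the real waitforlisten polls with bounded retries rather than this naive loop, and rpc.py defaults to /var/tmp/spdk.sock):

    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll until the app responds on its RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
      sleep 0.1
    done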
00:09:39.000 [2024-12-05 09:41:26.457520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.258 [2024-12-05 09:41:26.638182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.258 [2024-12-05 09:41:26.801356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.258 [2024-12-05 09:41:26.801441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.258 [2024-12-05 09:41:26.801483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.258 [2024-12-05 09:41:26.801522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.258 [2024-12-05 09:41:26.801542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.258 [2024-12-05 09:41:26.803195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.825 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.825 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:39.825 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.825 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.825 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.083 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.084 [2024-12-05 09:41:27.490886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.084 Malloc0 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
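Together with the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls just below, the rpc_cmd invocations above amount to the standard five-step target provisioning. Spelled out against the target's RPC socket (a sketch using scripts/rpc.py from the SPDK repo; rpc_cmd in the harness is a thin wrapper over it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420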
00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.084 [2024-12-05 09:41:27.596735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69244 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69244 /var/tmp/bdevperf.sock 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69244 ']' 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.084 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.343 [2024-12-05 09:41:27.721109] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
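Note the client-side pattern starting here: bdevperf is launched with -z, which (as I read the harness) makes it come up idle and wait on its own RPC socket, /var/tmp/bdevperf.sock, instead of running immediately; the harness then attaches the controller and kicks off I/O over that socket, as the next lines show. A sketch of the sequence, with paths and arguments as in the log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests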
00:09:40.343 [2024-12-05 09:41:27.721283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69244 ] 00:09:40.343 [2024-12-05 09:41:27.910114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.605 [2024-12-05 09:41:28.073747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.171 NVMe0n1 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.171 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.430 Running I/O for 10 seconds... 00:09:43.296 7821.00 IOPS, 30.55 MiB/s [2024-12-05T09:41:32.302Z] 7856.00 IOPS, 30.69 MiB/s [2024-12-05T09:41:33.238Z] 7930.00 IOPS, 30.98 MiB/s [2024-12-05T09:41:34.175Z] 7957.25 IOPS, 31.08 MiB/s [2024-12-05T09:41:35.112Z] 8084.40 IOPS, 31.58 MiB/s [2024-12-05T09:41:36.045Z] 8111.67 IOPS, 31.69 MiB/s [2024-12-05T09:41:36.981Z] 8127.14 IOPS, 31.75 MiB/s [2024-12-05T09:41:37.918Z] 8145.50 IOPS, 31.82 MiB/s [2024-12-05T09:41:39.296Z] 8167.33 IOPS, 31.90 MiB/s [2024-12-05T09:41:39.296Z] 8183.20 IOPS, 31.97 MiB/s 00:09:51.667 Latency(us) 00:09:51.667 [2024-12-05T09:41:39.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.667 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:51.667 Verification LBA range: start 0x0 length 0x4000 00:09:51.667 NVMe0n1 : 10.10 8194.00 32.01 0.00 0.00 124419.94 21805.61 85315.96 00:09:51.667 [2024-12-05T09:41:39.296Z] =================================================================================================================== 00:09:51.667 [2024-12-05T09:41:39.296Z] Total : 8194.00 32.01 0.00 0.00 124419.94 21805.61 85315.96 00:09:51.667 { 00:09:51.667 "results": [ 00:09:51.668 { 00:09:51.668 "job": "NVMe0n1", 00:09:51.668 "core_mask": "0x1", 00:09:51.668 "workload": "verify", 00:09:51.668 "status": "finished", 00:09:51.668 "verify_range": { 00:09:51.668 "start": 0, 00:09:51.668 "length": 16384 00:09:51.668 }, 00:09:51.668 "queue_depth": 1024, 00:09:51.668 "io_size": 4096, 00:09:51.668 "runtime": 10.10105, 00:09:51.668 "iops": 8193.999633701447, 00:09:51.668 "mibps": 32.00781106914628, 00:09:51.668 "io_failed": 0, 00:09:51.668 "io_timeout": 0, 00:09:51.668 "avg_latency_us": 124419.93994270953, 00:09:51.668 "min_latency_us": 21805.614545454544, 00:09:51.668 "max_latency_us": 85315.95636363636 00:09:51.668 } 00:09:51.668 ], 00:09:51.668 "core_count": 1 00:09:51.668 } 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
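perform_tests above drove the 10-second verify workload at queue depth 1024 and returned the JSON blob ending in "core_count": 1. The headline figures are self-consistent and easy to pull out; e.g., assuming the blob is saved to a hypothetical results.json:

    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s"' results.json
    # cross-check: MiB/s = IOPS * io_size / 2^20 = 8194.00 * 4096 / 1048576 ≈ 32.01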
target/queue_depth.sh@39 -- # killprocess 69244 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69244 ']' 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69244 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69244 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.668 killing process with pid 69244 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69244' 00:09:51.668 Received shutdown signal, test time was about 10.000000 seconds 00:09:51.668 00:09:51.668 Latency(us) 00:09:51.668 [2024-12-05T09:41:39.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.668 [2024-12-05T09:41:39.297Z] =================================================================================================================== 00:09:51.668 [2024-12-05T09:41:39.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69244 00:09:51.668 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69244 00:09:52.604 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.604 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.604 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.604 09:41:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.604 rmmod nvme_tcp 00:09:52.604 rmmod nvme_fabrics 00:09:52.604 rmmod nvme_keyring 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 69194 ']' 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 69194 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69194 ']' 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69194 00:09:52.604 09:41:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69194 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.604 killing process with pid 69194 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69194' 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69194 00:09:52.604 09:41:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69194 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.537 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.795 09:41:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.795 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.796 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.796 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:53.796 00:09:53.796 real 0m15.740s 00:09:53.796 user 0m25.737s 00:09:53.796 sys 0m2.707s 00:09:53.796 ************************************ 00:09:53.796 END TEST nvmf_queue_depth 00:09:53.796 ************************************ 00:09:53.796 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.796 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.054 ************************************ 00:09:54.054 START TEST nvmf_target_multipath 00:09:54.054 ************************************ 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:54.054 * Looking for test storage... 
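The teardown that closed nvmf_queue_depth above follows the harness's standard nvmftestfini sequence: kill the perf helper (pid 69244) and the target app (pid 69194), unload the nvme-tcp/nvme-fabrics kernel modules, restore iptables minus the SPDK_NVMF-tagged rules, and delete the veth/bridge topology before the timing summary is printed. A condensed sketch of that sequence, with names taken from the trace above — the in-tree versions in nvmf/common.sh and autotest_common.sh add retries and xtrace handling, so treat this as an approximation, not the literal implementation:

#!/usr/bin/env bash
# Approximate nvmftestfini-style cleanup (simplified sketch).

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
}

killprocess "$nvmfpid"                         # e.g. 69194 in the run above

# Unload the kernel initiator stack (tcp first, then fabrics).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop only the iptables rules this run added (tagged with SPDK_NVMF comments).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the virtual network built for the test.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null
    ip link set "$dev" down 2>/dev/null
done
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if 2>/dev/null
ip link delete nvmf_init_if2 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null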
00:09:54.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:54.054 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.055 --rc genhtml_branch_coverage=1 00:09:54.055 --rc genhtml_function_coverage=1 00:09:54.055 --rc genhtml_legend=1 00:09:54.055 --rc geninfo_all_blocks=1 00:09:54.055 --rc geninfo_unexecuted_blocks=1 00:09:54.055 00:09:54.055 ' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.055 --rc genhtml_branch_coverage=1 00:09:54.055 --rc genhtml_function_coverage=1 00:09:54.055 --rc genhtml_legend=1 00:09:54.055 --rc geninfo_all_blocks=1 00:09:54.055 --rc geninfo_unexecuted_blocks=1 00:09:54.055 00:09:54.055 ' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.055 --rc genhtml_branch_coverage=1 00:09:54.055 --rc genhtml_function_coverage=1 00:09:54.055 --rc genhtml_legend=1 00:09:54.055 --rc geninfo_all_blocks=1 00:09:54.055 --rc geninfo_unexecuted_blocks=1 00:09:54.055 00:09:54.055 ' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.055 --rc genhtml_branch_coverage=1 00:09:54.055 --rc genhtml_function_coverage=1 00:09:54.055 --rc genhtml_legend=1 00:09:54.055 --rc geninfo_all_blocks=1 00:09:54.055 --rc geninfo_unexecuted_blocks=1 00:09:54.055 00:09:54.055 ' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.055 
09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:54.055 09:41:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.055 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:54.314 Cannot find device "nvmf_init_br" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:54.314 Cannot find device "nvmf_init_br2" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:54.314 Cannot find device "nvmf_tgt_br" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.314 Cannot find device "nvmf_tgt_br2" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:54.314 Cannot find device "nvmf_init_br" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:54.314 Cannot find device "nvmf_init_br2" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:54.314 Cannot find device "nvmf_tgt_br" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:54.314 Cannot find device "nvmf_tgt_br2" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:54.314 Cannot find device "nvmf_br" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:54.314 Cannot find device "nvmf_init_if" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:54.314 Cannot find device "nvmf_init_if2" 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
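For orientation, the topology nvmf_veth_init has just assembled: four veth pairs whose bridge-facing peers (the *_br ends) get enslaved to the nvmf_br bridge below; the initiator ends stay in the root namespace with 10.0.0.1/24 and 10.0.0.2/24, while the target ends move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, where the SPDK target will listen. A condensed replay of the same commands (interface names and addresses copied from the trace):

#!/usr/bin/env bash
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One "working" end and one bridge-facing peer per pair.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace with the SPDK app.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator addresses in the root namespace, target addresses inside.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up on both sides of the namespace boundary.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

The bridge creation, the enslaving of the *_br peers, and the SPDK_NVMF-tagged iptables ACCEPT rules follow in the trace below, after which the four pings validate cross-namespace connectivity in both directions.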
00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.314 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.572 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.572 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.572 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.573 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.573 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:54.573 00:09:54.573 --- 10.0.0.3 ping statistics --- 00:09:54.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.573 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.573 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.573 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:09:54.573 00:09:54.573 --- 10.0.0.4 ping statistics --- 00:09:54.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.573 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:54.573 00:09:54.573 --- 10.0.0.1 ping statistics --- 00:09:54.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.573 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:54.573 00:09:54.573 --- 10.0.0.2 ping statistics --- 00:09:54.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.573 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=69658 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 69658 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 69658 ']' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
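At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 69658) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal poll with the same intent — the real helper in autotest_common.sh adds a retry cap, xtrace suppression, and richer diagnostics, so this is only a sketch of the idea:

# Sketch: wait until an SPDK app's JSON-RPC server is reachable.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock
pid=$nvmfpid                      # 69658 in the run above

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for ((i = 0; i < 100; i++)); do
    # Bail out early if the app crashed during startup.
    kill -0 "$pid" 2>/dev/null || { echo "$pid exited prematurely" >&2; exit 1; }
    # rpc_get_methods succeeds once the RPC listener is up.
    if "$rpc_py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_sock" >&2
exit 1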
00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.573 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 [2024-12-05 09:41:42.196461] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:54.573 [2024-12-05 09:41:42.196642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.832 [2024-12-05 09:41:42.392258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.091 [2024-12-05 09:41:42.565318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.091 [2024-12-05 09:41:42.565401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.091 [2024-12-05 09:41:42.565426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.091 [2024-12-05 09:41:42.565444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.091 [2024-12-05 09:41:42.565476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.091 [2024-12-05 09:41:42.568423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.091 [2024-12-05 09:41:42.568575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.091 [2024-12-05 09:41:42.568973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.091 [2024-12-05 09:41:42.569493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.660 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:55.919 [2024-12-05 09:41:43.409252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.919 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:56.179 Malloc0 00:09:56.179 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:56.438 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.696 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:56.955 [2024-12-05 09:41:44.457651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:56.955 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:57.213 [2024-12-05 09:41:44.766106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:57.213 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:57.471 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:57.729 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.729 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:57.729 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.729 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:57.729 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.631 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:59.890 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69797 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:59.891 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:59.891 [global] 00:09:59.891 thread=1 00:09:59.891 invalidate=1 00:09:59.891 rw=randrw 00:09:59.891 time_based=1 00:09:59.891 runtime=6 00:09:59.891 ioengine=libaio 00:09:59.891 direct=1 00:09:59.891 bs=4096 00:09:59.891 iodepth=128 00:09:59.891 norandommap=0 00:09:59.891 numjobs=1 00:09:59.891 00:09:59.891 verify_dump=1 00:09:59.891 verify_backlog=512 00:09:59.891 verify_state_save=0 00:09:59.891 do_verify=1 00:09:59.891 verify=crc32c-intel 00:09:59.891 [job0] 00:09:59.891 filename=/dev/nvme0n1 00:09:59.891 Could not set queue depth (nvme0n1) 00:09:59.891 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.891 fio-3.35 00:09:59.891 Starting 1 thread 00:10:00.828 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:01.087 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:01.346 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:02.283 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:02.283 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.283 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:02.283 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:02.851 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:04.230 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:04.230 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.230 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.230 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69797 00:10:06.186 00:10:06.186 job0: (groupid=0, jobs=1): err= 0: pid=69822: Thu Dec 5 09:41:53 2024 00:10:06.186 read: IOPS=9409, BW=36.8MiB/s (38.5MB/s)(221MiB/6007msec) 00:10:06.186 slat (usec): min=2, max=8434, avg=61.52, stdev=284.64 00:10:06.186 clat (usec): min=2549, max=21926, avg=9211.62, stdev=1542.36 00:10:06.186 lat (usec): min=2587, max=21959, avg=9273.14, stdev=1554.23 00:10:06.186 clat percentiles (usec): 00:10:06.186 | 1.00th=[ 5604], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8160], 00:10:06.186 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:10:06.186 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11076], 95.00th=[11994], 00:10:06.186 | 99.00th=[13698], 99.50th=[14353], 99.90th=[16450], 99.95th=[20055], 00:10:06.186 | 99.99th=[21627] 00:10:06.186 bw ( KiB/s): min= 9752, max=24296, per=52.61%, avg=19800.00, stdev=5328.11, samples=12 00:10:06.186 iops : min= 2438, max= 6074, avg=4950.00, stdev=1332.03, samples=12 00:10:06.186 write: IOPS=5596, BW=21.9MiB/s (22.9MB/s)(117MiB/5333msec); 0 zone resets 00:10:06.186 slat (usec): min=4, max=4184, avg=73.73, stdev=201.37 00:10:06.186 clat (usec): min=1348, max=21356, avg=8082.19, stdev=1266.15 00:10:06.186 lat (usec): min=1389, max=21420, avg=8155.92, stdev=1272.62 00:10:06.186 clat percentiles (usec): 00:10:06.186 | 1.00th=[ 4359], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7308], 00:10:06.186 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:06.186 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[ 9896], 00:10:06.186 | 99.00th=[11731], 99.50th=[12649], 99.90th=[14746], 99.95th=[16188], 00:10:06.186 | 99.99th=[20841] 00:10:06.186 bw ( KiB/s): min=10224, max=24376, per=88.73%, avg=19862.00, stdev=5086.42, samples=12 00:10:06.186 iops : min= 2556, max= 6094, avg=4965.50, stdev=1271.61, samples=12 00:10:06.186 lat (msec) : 2=0.01%, 4=0.17%, 10=80.68%, 20=19.11%, 50=0.04% 00:10:06.186 cpu : usr=5.09%, sys=19.53%, ctx=5465, majf=0, minf=102 00:10:06.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.186 issued rwts: total=56522,29844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.186 00:10:06.186 Run status group 0 (all jobs): 00:10:06.186 READ: bw=36.8MiB/s (38.5MB/s), 36.8MiB/s-36.8MiB/s (38.5MB/s-38.5MB/s), io=221MiB (232MB), run=6007-6007msec 00:10:06.186 WRITE: bw=21.9MiB/s (22.9MB/s), 21.9MiB/s-21.9MiB/s (22.9MB/s-22.9MB/s), io=117MiB (122MB), run=5333-5333msec 00:10:06.186 00:10:06.186 Disk stats (read/write): 00:10:06.186 nvme0n1: ios=55965/29073, merge=0/0, ticks=485293/220441, in_queue=705734, util=98.70% 00:10:06.186 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:06.445 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:06.704 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:06.704 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:06.704 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.704 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.704 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:06.705 09:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69955 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:07.641 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:07.641 [global] 00:10:07.641 thread=1 00:10:07.641 invalidate=1 00:10:07.641 rw=randrw 00:10:07.641 time_based=1 00:10:07.641 runtime=6 00:10:07.641 ioengine=libaio 00:10:07.641 direct=1 00:10:07.641 bs=4096 00:10:07.641 iodepth=128 00:10:07.641 norandommap=0 00:10:07.641 numjobs=1 00:10:07.641 00:10:07.641 verify_dump=1 00:10:07.641 verify_backlog=512 00:10:07.641 verify_state_save=0 00:10:07.641 do_verify=1 00:10:07.641 verify=crc32c-intel 00:10:07.641 [job0] 00:10:07.641 filename=/dev/nvme0n1 00:10:07.641 Could not set queue depth (nvme0n1) 00:10:07.898 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.898 fio-3.35 00:10:07.898 Starting 1 thread 00:10:08.831 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:09.088 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:09.346 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:10.283 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:10.283 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.283 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.283 09:41:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:10.541 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:10.799 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:11.731 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:11.731 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:11.731 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:11.731 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69955 00:10:14.260 00:10:14.260 job0: (groupid=0, jobs=1): err= 0: pid=69976: Thu Dec 5 09:42:01 2024 00:10:14.260 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(235MiB/6003msec) 00:10:14.260 slat (usec): min=7, max=7183, avg=49.66, stdev=242.87 00:10:14.260 clat (usec): min=392, max=23253, avg=8731.55, stdev=2576.59 00:10:14.260 lat (usec): min=418, max=23265, avg=8781.21, stdev=2585.06 00:10:14.260 clat percentiles (usec): 00:10:14.260 | 1.00th=[ 1876], 5.00th=[ 3523], 10.00th=[ 5604], 20.00th=[ 7570], 00:10:14.260 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:14.260 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11600], 95.00th=[13042], 00:10:14.260 | 99.00th=[16057], 99.50th=[17433], 99.90th=[20317], 99.95th=[21627], 00:10:14.260 | 99.99th=[22938] 00:10:14.260 bw ( KiB/s): min= 4080, max=29008, per=51.53%, avg=20693.09, stdev=7542.38, samples=11 00:10:14.260 iops : min= 1020, max= 7252, avg=5173.27, stdev=1885.60, samples=11 00:10:14.260 write: IOPS=5937, BW=23.2MiB/s (24.3MB/s)(124MiB/5346msec); 0 zone resets 00:10:14.260 slat (usec): min=14, max=2243, avg=60.37, stdev=162.54 00:10:14.260 clat (usec): min=432, max=21012, avg=7436.36, stdev=2223.45 00:10:14.260 lat (usec): min=481, max=21060, avg=7496.72, stdev=2228.79 00:10:14.260 clat percentiles (usec): 00:10:14.260 | 1.00th=[ 1745], 5.00th=[ 2835], 10.00th=[ 4080], 20.00th=[ 6456], 00:10:14.260 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7898], 00:10:14.260 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9765], 95.00th=[10814], 00:10:14.260 | 99.00th=[13435], 99.50th=[14484], 99.90th=[18220], 99.95th=[19530], 00:10:14.260 | 99.99th=[20841] 00:10:14.260 bw ( KiB/s): min= 4368, max=28320, per=87.52%, avg=20784.73, stdev=7322.40, samples=11 00:10:14.260 iops : min= 1092, max= 7080, avg=5196.18, stdev=1830.60, samples=11 00:10:14.260 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.13% 00:10:14.260 lat (msec) : 2=1.16%, 4=6.07%, 10=74.72%, 20=17.76%, 50=0.09% 00:10:14.260 cpu : usr=5.03%, sys=20.83%, ctx=6209, majf=0, minf=127 00:10:14.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:14.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.260 issued rwts: total=60268,31740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.260 00:10:14.260 Run status group 0 (all jobs): 00:10:14.260 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=235MiB (247MB), run=6003-6003msec 00:10:14.260 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=124MiB (130MB), run=5346-5346msec 00:10:14.260 00:10:14.260 Disk stats (read/write): 00:10:14.260 nvme0n1: ios=59349/31228, merge=0/0, ticks=490121/219023, in_queue=709144, util=98.60% 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:14.260 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:14.519 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.520 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.520 rmmod nvme_tcp 00:10:14.520 rmmod nvme_fabrics 00:10:14.520 rmmod nvme_keyring 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 69658 ']' 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 69658 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 69658 ']' 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 69658 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69658 00:10:14.520 killing process with pid 69658 00:10:14.520 09:42:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69658' 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 69658 00:10:14.520 09:42:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 69658 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:15.897 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:15.898 09:42:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.898 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:16.157 00:10:16.157 real 0m22.087s 00:10:16.157 user 1m24.271s 00:10:16.157 sys 0m6.012s 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.157 ************************************ 00:10:16.157 END TEST nvmf_target_multipath 00:10:16.157 ************************************ 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.157 ************************************ 00:10:16.157 START TEST nvmf_zcopy 00:10:16.157 ************************************ 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.157 * Looking for test storage... 
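The multipath checks traced above all go through one polling helper; a minimal reconstruction of check_ana_state from the target/multipath.sh@18-26 xtrace lines (the exact loop structure in the real script may differ):

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll until the sysfs node exists and reports the expected ANA
        # state; give up after ~20 one-second sleeps.
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }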
00:10:16.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.157 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.158 --rc genhtml_branch_coverage=1 00:10:16.158 --rc genhtml_function_coverage=1 00:10:16.158 --rc genhtml_legend=1 00:10:16.158 --rc geninfo_all_blocks=1 00:10:16.158 --rc geninfo_unexecuted_blocks=1 00:10:16.158 00:10:16.158 ' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.158 --rc genhtml_branch_coverage=1 00:10:16.158 --rc genhtml_function_coverage=1 00:10:16.158 --rc genhtml_legend=1 00:10:16.158 --rc geninfo_all_blocks=1 00:10:16.158 --rc geninfo_unexecuted_blocks=1 00:10:16.158 00:10:16.158 ' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.158 --rc genhtml_branch_coverage=1 00:10:16.158 --rc genhtml_function_coverage=1 00:10:16.158 --rc genhtml_legend=1 00:10:16.158 --rc geninfo_all_blocks=1 00:10:16.158 --rc geninfo_unexecuted_blocks=1 00:10:16.158 00:10:16.158 ' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.158 --rc genhtml_branch_coverage=1 00:10:16.158 --rc genhtml_function_coverage=1 00:10:16.158 --rc genhtml_legend=1 00:10:16.158 --rc geninfo_all_blocks=1 00:10:16.158 --rc geninfo_unexecuted_blocks=1 00:10:16.158 00:10:16.158 ' 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
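The scripts/common.sh trace above is a component-wise version comparison deciding that lcov 1.15 predates 2.x. A condensed sketch of the '<' path it exercises (version_lt is a hypothetical name; the real cmp_versions handles all comparison operators and normalizes each component through decimal first):

    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        # Compare numerically, left to right; missing components count as 0.
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x'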
00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.158 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
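common.sh above mints one host NQN per run with nvme gen-hostnqn and keeps the matching host ID alongside it in the NVME_HOST array; every connect in these tests passes both. A short sketch of that pattern (the parameter-expansion derivation and the connect line are assumptions, modeled on the values in the trace and on the multipath test earlier in this log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID, matching the trace
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"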
00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:16.418 Cannot find device "nvmf_init_br" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:16.418 09:42:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:16.418 Cannot find device "nvmf_init_br2" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:16.418 Cannot find device "nvmf_tgt_br" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.418 Cannot find device "nvmf_tgt_br2" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:16.418 Cannot find device "nvmf_init_br" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:16.418 Cannot find device "nvmf_init_br2" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:16.418 Cannot find device "nvmf_tgt_br" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:16.418 Cannot find device "nvmf_tgt_br2" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:16.418 Cannot find device "nvmf_br" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:16.418 Cannot find device "nvmf_init_if" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:16.418 Cannot find device "nvmf_init_if2" 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.418 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:16.419 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.419 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.419 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.419 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:16.679 09:42:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:16.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:16.679 00:10:16.679 --- 10.0.0.3 ping statistics --- 00:10:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.679 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:16.679 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:16.679 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:10:16.679 00:10:16.679 --- 10.0.0.4 ping statistics --- 00:10:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.679 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:16.679 00:10:16.679 --- 10.0.0.1 ping statistics --- 00:10:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.679 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:16.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:16.679 00:10:16.679 --- 10.0.0.2 ping statistics --- 00:10:16.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.679 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=70326 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 70326 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 70326 ']' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.679 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.937 [2024-12-05 09:42:04.370334] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
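nvmf_veth_init, traced above, wires the target into its own network namespace and bridges it to the initiator before the four ping checks. Condensed to one initiator/target pair (the second pair and the teardown-guard "true" checks elided), the topology it builds is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> target, as verified above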
00:10:16.937 [2024-12-05 09:42:04.370520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.937 [2024-12-05 09:42:04.553969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.195 [2024-12-05 09:42:04.654190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.195 [2024-12-05 09:42:04.654279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.195 [2024-12-05 09:42:04.654313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.195 [2024-12-05 09:42:04.654339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.195 [2024-12-05 09:42:04.654352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.195 [2024-12-05 09:42:04.655589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 [2024-12-05 09:42:05.318363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 [2024-12-05 09:42:05.338424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.761 malloc0 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.761 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.019 { 00:10:18.019 "params": { 00:10:18.019 "name": "Nvme$subsystem", 00:10:18.019 "trtype": "$TEST_TRANSPORT", 00:10:18.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.019 "adrfam": "ipv4", 00:10:18.019 "trsvcid": "$NVMF_PORT", 00:10:18.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.019 "hdgst": ${hdgst:-false}, 00:10:18.019 "ddgst": ${ddgst:-false} 00:10:18.019 }, 00:10:18.019 "method": "bdev_nvme_attach_controller" 00:10:18.019 } 00:10:18.019 EOF 00:10:18.019 )") 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
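Collapsed from the rpc_cmd traces just above, the zcopy target comes up with five RPCs and bdevperf then drives verify I/O against it. A plain-command rendering (rpc_cmd is shorthand for the rpc.py path used elsewhere in this log, and the --json /dev/fd/62 in the trace is bash process substitution, written out explicitly here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192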
00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.019 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.019 "params": { 00:10:18.019 "name": "Nvme1", 00:10:18.019 "trtype": "tcp", 00:10:18.019 "traddr": "10.0.0.3", 00:10:18.019 "adrfam": "ipv4", 00:10:18.019 "trsvcid": "4420", 00:10:18.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.019 "hdgst": false, 00:10:18.019 "ddgst": false 00:10:18.019 }, 00:10:18.019 "method": "bdev_nvme_attach_controller" 00:10:18.019 }' 00:10:18.019 [2024-12-05 09:42:05.511919] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:10:18.019 [2024-12-05 09:42:05.512107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70377 ] 00:10:18.277 [2024-12-05 09:42:05.691007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.277 [2024-12-05 09:42:05.798856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.845 Running I/O for 10 seconds... 00:10:20.717 5362.00 IOPS, 41.89 MiB/s [2024-12-05T09:42:09.283Z] 5586.50 IOPS, 43.64 MiB/s [2024-12-05T09:42:10.222Z] 5665.33 IOPS, 44.26 MiB/s [2024-12-05T09:42:11.600Z] 5697.75 IOPS, 44.51 MiB/s [2024-12-05T09:42:12.538Z] 5720.80 IOPS, 44.69 MiB/s [2024-12-05T09:42:13.473Z] 5735.50 IOPS, 44.81 MiB/s [2024-12-05T09:42:14.409Z] 5754.71 IOPS, 44.96 MiB/s [2024-12-05T09:42:15.344Z] 5769.38 IOPS, 45.07 MiB/s [2024-12-05T09:42:16.278Z] 5774.78 IOPS, 45.12 MiB/s [2024-12-05T09:42:16.278Z] 5783.70 IOPS, 45.19 MiB/s 00:10:28.649 Latency(us) 00:10:28.649 [2024-12-05T09:42:16.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.649 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.649 Verification LBA range: start 0x0 length 0x1000 00:10:28.649 Nvme1n1 : 10.01 5784.21 45.19 0.00 0.00 22071.87 834.09 40274.85 00:10:28.649 [2024-12-05T09:42:16.278Z] =================================================================================================================== 00:10:28.649 [2024-12-05T09:42:16.278Z] Total : 5784.21 45.19 0.00 0.00 22071.87 834.09 40274.85 00:10:29.588 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70507 00:10:29.588 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.588 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.588 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.588 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.589 { 00:10:29.589 "params": { 00:10:29.589 "name": "Nvme$subsystem", 
00:10:29.589 "trtype": "$TEST_TRANSPORT", 00:10:29.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.589 "adrfam": "ipv4", 00:10:29.589 "trsvcid": "$NVMF_PORT", 00:10:29.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.589 "hdgst": ${hdgst:-false}, 00:10:29.589 "ddgst": ${ddgst:-false} 00:10:29.589 }, 00:10:29.589 "method": "bdev_nvme_attach_controller" 00:10:29.589 } 00:10:29.589 EOF 00:10:29.589 )") 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:29.589 [2024-12-05 09:42:17.146185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.589 [2024-12-05 09:42:17.146233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.589 09:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.589 "params": { 00:10:29.589 "name": "Nvme1", 00:10:29.589 "trtype": "tcp", 00:10:29.589 "traddr": "10.0.0.3", 00:10:29.589 "adrfam": "ipv4", 00:10:29.589 "trsvcid": "4420", 00:10:29.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.589 "hdgst": false, 00:10:29.589 "ddgst": false 00:10:29.589 }, 00:10:29.589 "method": "bdev_nvme_attach_controller" 00:10:29.589 }' 00:10:29.589 2024/12/05 09:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:29.589 [2024-12-05 09:42:17.158104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.589 [2024-12-05 09:42:17.158160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.589 2024/12/05 09:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:29.589 [2024-12-05 09:42:17.170117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.589 [2024-12-05 09:42:17.170173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.589 2024/12/05 09:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:29.589 [2024-12-05 09:42:17.182098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.589 [2024-12-05 09:42:17.182150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.589 2024/12/05 09:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
[... the identical "Requested NSID 1 already in use" / "Unable to add namespace" pair, each followed by the same Code=-32602 JSON-RPC client error, repeats at ~12 ms intervals from 09:42:17.158104 through 09:42:17.606330 while the second bdevperf instance initializes; the distinct notices from that window follow ...]
00:10:29.850 [2024-12-05 09:42:17.271199] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:10:29.850 [2024-12-05 09:42:17.271486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70507 ]
00:10:29.851 [2024-12-05 09:42:17.445158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.111 [2024-12-05 09:42:17.557218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
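The repeating pair of target-side errors and the Go client's Code=-32602 lines are two views of the same event: while bdevperf runs, the test keeps calling nvmf_subsystem_add_ns for a namespace ID that nqn.2016-06.io.spdk:cnode1 already has, and the target rejects each attempt. A hand-run equivalent of one rejected call might look roughly like this (a sketch using scripts/rpc.py from the SPDK repo; NQN, bdev name, and NSID are taken from the error messages above):

    # NSID 1 on cnode1 is already mapped to bdev malloc0, so this add is
    # expected to fail with JSON-RPC error -32602 (Invalid parameters).
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0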
[... the same error pair and Code=-32602 JSON-RPC message keep repeating at ~12 ms intervals from 09:42:17.618285 through 09:42:17.942525 ...]
00:10:30.373 [2024-12-05 09:42:17.954525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.373 [2024-12-05 09:42:17.954563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.373 Running I/O for 5 seconds...
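For reference, each of the Go client's "error on JSON-RPC call ... Code=-32602" lines in this log corresponds to a standard JSON-RPC 2.0 error object coming back over the target's RPC socket; -32602 is the spec's "invalid params" code. A raw sketch of one exchange, assuming SPDK's default socket path /var/tmp/spdk.sock and an arbitrary request id (the test environment may differ):

    # Request (params shape copied from the error log entries):
    echo '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' \
        | nc -U /var/tmp/spdk.sock
    # Expected reply: {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}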
00:10:30.373 2024/12/05 09:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same rejected nvmf_subsystem_add_ns call keeps being retried throughout the 5-second run, now at irregular 10-20 ms intervals, from 09:42:17.974708 through 09:42:18.692820, with no other message types interleaved ...]
00:10:31.153 [2024-12-05 09:42:18.708573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.153 [2024-12-05 09:42:18.708613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:10:31.153 [2024-12-05 09:42:18.719851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.153 [2024-12-05 09:42:18.719892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.153 [2024-12-05 09:42:18.734926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.153 [2024-12-05 09:42:18.734977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.153 [2024-12-05 09:42:18.751415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.153 [2024-12-05 09:42:18.751456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.153 [2024-12-05 09:42:18.767007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.153 [2024-12-05 09:42:18.767047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.153 [2024-12-05 09:42:18.777045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.153 [2024-12-05 09:42:18.777083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.153 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.791118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.791173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.805113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.805169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.820413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.820453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.847292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.847366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.862851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.862893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.878099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.878137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.888314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.888353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.903100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.903140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.919336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.919376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.937314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.937370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.952194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.952246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 10774.00 IOPS, 84.17 MiB/s [2024-12-05T09:42:19.042Z] [2024-12-05 09:42:18.968015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.968092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.979331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.979378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:18.994279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:18.994331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:18 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:19.011174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:19.011230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.413 [2024-12-05 09:42:19.026992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.413 [2024-12-05 09:42:19.027047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.413 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.044712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.044752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.060592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.060633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.077658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.077715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.092826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.092866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.109295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.109336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.125909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.125961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.142420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.142461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.159626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.159666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.177413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.177453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.673 [2024-12-05 09:42:19.192562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.673 [2024-12-05 09:42:19.192604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.673 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.203859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.203898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.215544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.215585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.230986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.231027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.247287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.247327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.257616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.257670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.272744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.272784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.674 [2024-12-05 09:42:19.288168] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.674 [2024-12-05 09:42:19.288224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.674 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.933 [2024-12-05 09:42:19.304716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.933 [2024-12-05 09:42:19.304757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.933 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.933 [2024-12-05 09:42:19.322616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.933 [2024-12-05 09:42:19.322669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.933 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.933 [2024-12-05 09:42:19.342518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.933 [2024-12-05 09:42:19.342563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.933 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.933 [2024-12-05 09:42:19.358290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.933 [2024-12-05 09:42:19.358342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.933 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.933 [2024-12-05 09:42:19.375361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.933 [2024-12-05 09:42:19.375419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.391680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 
09:42:19.391722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.403757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.403799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.419860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.419901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.434821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.434861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.445359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.445396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.460554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.460592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:31.934 [2024-12-05 09:42:19.475984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.934 [2024-12-05 09:42:19.476025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.934 2024/12/05 09:42:19 
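Each failed attempt above is the same end-to-end path: spdk_nvmf_subsystem_add_ns_ext() in subsystem.c rejects the duplicate NSID, nvmf_rpc.c reports that the namespace could not be added, and the Go JSON-RPC client under test logs the resulting error (-32602 is the standard JSON-RPC "Invalid params" code). A minimal sketch of reproducing one such failure by hand with SPDK's scripts/rpc.py against a running target on the default RPC socket; the bdev size and subsystem flags below are illustrative assumptions, not values taken from this run:

  # create a backing bdev and a subsystem, then claim NSID 1 twice
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # a second add with the same NSID is rejected, surfacing as Code=-32602 Msg=Invalid parameters
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
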
[... identical failure triples at 09:42:19.491079, .506051, .522121, .539327, .555723, .571450, .582617, .597438, .613708, .628695, .644639, .661028, .679492, .695171, .711546, .728831, .743921, .759880, .778197, .794050, .810447, .827832, .843781, .857405, .873113, .889120, .904649, .920091, .930382, .945065 ...]
00:10:32.525 10795.00 IOPS, 84.34 MiB/s [2024-12-05T09:42:20.154Z]
[... identical failure triples at 09:42:19.961429, .979799, .995441, 09:42:20.012104, .030410, .045972, .057645, .075430, .092354, .109598, .126961, .143185, .158558, .175408, .190481, .206429, .224259, .239201, .249808, .265153, .280608, .297800, .315292, .330864, .347109 ...]
00:10:32.785 [2024-12-05 09:42:20.364360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested
NSID 1 already in use 00:10:32.785 [2024-12-05 09:42:20.364411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.785 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:32.785 [2024-12-05 09:42:20.379974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.785 [2024-12-05 09:42:20.380040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.785 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:32.785 [2024-12-05 09:42:20.397781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.785 [2024-12-05 09:42:20.397870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.785 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.414824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.414866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.429450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.429490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.445557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.445614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.462229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.462284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.478824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.478864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.495677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.495717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.511627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.511667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.528505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.528545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.544315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.544355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.554837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.554877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.570407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.570447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.044 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.044 [2024-12-05 09:42:20.586248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.044 [2024-12-05 09:42:20.586288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.045 [2024-12-05 09:42:20.602823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.045 [2024-12-05 09:42:20.602864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.045 [2024-12-05 09:42:20.621225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.045 [2024-12-05 09:42:20.621265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.045 [2024-12-05 09:42:20.636098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.045 [2024-12-05 09:42:20.636137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.045 [2024-12-05 09:42:20.651996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.045 [2024-12-05 09:42:20.652036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.045 [2024-12-05 09:42:20.661973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.045 [2024-12-05 09:42:20.662012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.045 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.304 [2024-12-05 09:42:20.676632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.304 [2024-12-05 09:42:20.676671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.304 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.304 [2024-12-05 09:42:20.694186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.304 [2024-12-05 09:42:20.694227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.304 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.304 [2024-12-05 09:42:20.709326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.304 [2024-12-05 09:42:20.709365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.304 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.304 [2024-12-05 09:42:20.724854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.304 [2024-12-05 09:42:20.724908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.304 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:33.304 [2024-12-05 09:42:20.735104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.304 [2024-12-05 09:42:20.735144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.304 2024/12/05 09:42:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
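For orientation: each repetition above is one failed JSON-RPC round trip. The two target-side *ERROR* lines come from SPDK itself (subsystem.c rejects the add because NSID 1 is already claimed, then nvmf_rpc.c fails the RPC), and the mangled %!s(bool=false) values in the client line are simply Go's fmt output when the test's Go client prints boolean parameters with the %s verb. Below is a minimal sketch of the same call; it assumes a target listening on SPDK's default RPC socket /var/tmp/spdk.sock, and the rpc() helper is illustrative, not the harness's actual client.

import json
import socket

def rpc(method, params=None, path="/var/tmp/spdk.sock"):
    """Send one JSON-RPC 2.0 request over SPDK's Unix-domain RPC socket."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)  # parses once the full response arrived
            except json.JSONDecodeError:
                continue  # response not complete yet, keep reading

# Adding the same NSID twice reproduces the failure spammed in this log:
for _ in range(2):
    print(rpc("nvmf_subsystem_add_ns", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }))
# The second call should come back with
# {"error": {"code": -32602, "message": "Invalid parameters"}} while the
# target logs "Requested NSID 1 already in use".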
[... the cycle continues unchanged from 09:42:20.749 through 09:42:20.948 (elapsed 00:10:33.304 to 00:10:33.564) ...]
00:10:33.564 10812.67 IOPS, 84.47 MiB/s [2024-12-05T09:42:21.193Z]
[... the cycle continues unchanged from 09:42:20.965 through 09:42:21.538 (elapsed 00:10:33.564 to 00:10:34.084) ...]
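The test appears to be exercising this error path on purpose, but if the add were meant to succeed, the collision could be avoided by first asking the target which NSIDs are taken. A sketch reusing the rpc() helper above; the nvmf_get_subsystems field names (nqn, namespaces[].nsid) are assumed from current SPDK output and may differ by version.

# Pick the first free NSID on cnode1 instead of hard-coding nsid=1.
subsystems = rpc("nvmf_get_subsystems")["result"]
cnode1 = next(s for s in subsystems if s["nqn"] == "nqn.2016-06.io.spdk:cnode1")
used = {ns["nsid"] for ns in cnode1.get("namespaces", [])}
free_nsid = next(n for n in range(1, 0xFFFFFFFF) if n not in used)
print(rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": free_nsid},
}))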
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.084 [2024-12-05 09:42:21.553276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.084 [2024-12-05 09:42:21.553346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.084 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.084 [2024-12-05 09:42:21.568606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.084 [2024-12-05 09:42:21.568645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.084 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.085 [2024-12-05 09:42:21.581000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.085 [2024-12-05 09:42:21.581057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.085 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.085 [2024-12-05 09:42:21.593467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.085 [2024-12-05 09:42:21.593531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.085 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.085 [2024-12-05 09:42:21.609955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.085 [2024-12-05 09:42:21.610009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.085 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:34.085 [2024-12-05 09:42:21.625096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.085 [2024-12-05 09:42:21.625136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.085 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:34.085 [2024-12-05 09:42:21.641403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:34.085 [2024-12-05 09:42:21.641443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:34.085 2024/12/05 09:42:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:34.085 [... the same three-line error repeats at ~15 ms intervals from 09:42:21.658 through 09:42:21.944, differing only in timestamps ...]
00:10:34.345 10832.25 IOPS, 84.63 MiB/s [2024-12-05T09:42:21.974Z]
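Every entry above is the same failure mode: the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that namespace is still attached, so the target rejects the request ("Requested NSID 1 already in use") and the RPC client sees Code=-32602. A minimal sketch of the offending call, assuming a target listening on SPDK's default RPC socket /var/tmp/spdk.sock (the socket path and the single-recv response read are illustrative simplifications, not taken from this log):

    # reproduce_add_ns.py -- sketch only; requires a running SPDK nvmf target
    # with subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0 already created.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")  # default SPDK RPC socket
        sock.sendall(json.dumps(request).encode())
        # While NSID 1 is still attached, the reply is a JSON-RPC error object
        # with code -32602 ("Invalid parameters"), matching the log lines above.
        print(sock.recv(65536).decode())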
00:10:34.605 [... the same error repeats from 09:42:21.961 through 09:42:22.951 while bdevperf keeps driving I/O underneath ...]
00:10:35.384 10840.00 IOPS, 84.69 MiB/s [2024-12-05T09:42:23.013Z]
00:10:35.384 [2024-12-05 09:42:22.966641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.384 [2024-12-05 09:42:22.966681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.384 2024/12/05 09:42:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:35.385 Latency(us)
00:10:35.385 [2024-12-05T09:42:23.014Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:35.385 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:35.385 Nvme1n1                      :       5.01   10840.52      84.69      0.00     0.00   11789.68    4885.41   22997.18
00:10:35.385 [2024-12-05T09:42:23.014Z] ===================================================================================================================
00:10:35.385 Total                        :              10840.52      84.69      0.00     0.00   11789.68    4885.41   22997.18
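A quick consistency check on the summary (arithmetic added here, not present in the log): 10840.52 IOPS x 8192-byte I/Os = 88,805,540 B/s, and 88,805,540 / 2^20 = 84.69 MiB/s, matching the MiB/s column; with a queue depth of 128, Little's law (outstanding I/Os = IOPS x latency) predicts 128 / 11789.68 us ~ 10857 IOPS, within 0.2% of the measured average, so the latency and throughput columns agree.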
00:10:35.385 [... after the summary is printed, the identical error continues at ~12 ms intervals (timestamps 09:42:22.975 through 09:42:23.383) as the test winds down ...]
00:10:35.906 [2024-12-05 09:42:23.395829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.906 [2024-12-05 09:42:23.395878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.906 2024/12/05 09:42:23
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.407811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.407861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.419844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.419896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.431836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.431887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.443820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.443870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.455840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.455891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.467854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.467904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.479859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.479910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.487845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.487893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.495835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.495882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.507860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.507910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.519873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.519924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.906 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:35.906 [2024-12-05 09:42:23.531856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.906 [2024-12-05 09:42:23.531905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.543869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.543920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.555889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.555965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.567864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.567913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.579880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.579957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.591870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.591920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.603920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.604001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.615892] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.615969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.627886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.627947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.639898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.639959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.651903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.651966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.663884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.663946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.675944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 09:42:23.675993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.166 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.166 [2024-12-05 09:42:23.687888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.166 [2024-12-05 
09:42:23.687949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.699922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.699985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.711921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.711985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.723900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.723961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.735923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.735985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.747957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.748006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.759913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.759990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.771967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.772017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.167 [2024-12-05 09:42:23.783981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.167 [2024-12-05 09:42:23.784032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.167 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.795970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.796020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.807968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.808017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.819960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.820009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.831977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.832026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.843994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.844045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.855971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.856022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.867989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.868038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.880080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.880145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.891996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.892049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.904042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.904079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 [2024-12-05 09:42:23.916029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.426 [2024-12-05 09:42:23.916064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.426 2024/12/05 09:42:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:36.426 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70507) - No such process 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70507 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.426 delay0 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.426 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:36.685 [2024-12-05 09:42:24.187688] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:43.248 Initializing NVMe Controllers 00:10:43.248 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.248 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.248 Initialization complete. Launching workers. 
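[Annotation] This is the point where zcopy.sh swaps the namespace's backing device: the long run of Code=-32602 failures above is the test re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached (which the target correctly rejects), after which the script removes the namespace, wraps malloc0 in a delay bdev, re-adds NSID 1 backed by delay0, and runs the abort example against it; its completion counts follow immediately below. A minimal re-creation of that sequence, as a sketch only: it assumes the target app, the TCP transport, and the listener on 10.0.0.3:4420 were already configured earlier in zcopy.sh, and the $rpc variable is ours; rpc_cmd in the trace is, in the SPDK test harness, a thin wrapper around scripts/rpc.py.

  rootdir=/home/vagrant/spdk_repo/spdk
  rpc="$rootdir/scripts/rpc.py"

  # Drop the malloc-backed namespace the error loop was colliding with.
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

  # Wrap malloc0 in a delay bdev; -r/-t/-w/-n are avg/p99 read and write
  # latencies in microseconds, so every I/O sits for roughly one second.
  $rpc bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000

  # Re-expose NSID 1, now backed by delay0.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive randrw I/O at queue depth 64 for 5 s and submit aborts against it.
  "$rootdir/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The artificial ~1 s latency is what makes the test meaningful: queued commands stay in flight long enough for abort requests to land, and the "success" count reported next is the number of commands the tool managed to cancel in that window.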
00:10:43.248 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:10:43.248 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:10:43.248 success 223, unsuccessful 173, failed 0 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.248 rmmod nvme_tcp 00:10:43.248 rmmod nvme_fabrics 00:10:43.248 rmmod nvme_keyring 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 70326 ']' 00:10:43.248 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 70326 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 70326 ']' 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 70326 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70326 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:43.249 killing process with pid 70326 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70326' 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 70326 00:10:43.249 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 70326 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.814 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:43.814 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:44.072 00:10:44.072 real 0m28.054s 00:10:44.072 user 0m44.629s 00:10:44.072 sys 0m7.589s 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 ************************************ 00:10:44.072 END TEST nvmf_zcopy 00:10:44.072 ************************************ 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.072 09:42:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.330 ************************************ 00:10:44.330 START TEST nvmf_nmic 00:10:44.330 ************************************ 00:10:44.330 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.330 * Looking for test storage... 00:10:44.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.330 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.330 --rc genhtml_branch_coverage=1 00:10:44.331 --rc genhtml_function_coverage=1 00:10:44.331 --rc genhtml_legend=1 00:10:44.331 --rc geninfo_all_blocks=1 00:10:44.331 --rc geninfo_unexecuted_blocks=1 00:10:44.331 00:10:44.331 ' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.331 --rc genhtml_branch_coverage=1 00:10:44.331 --rc genhtml_function_coverage=1 00:10:44.331 --rc genhtml_legend=1 00:10:44.331 --rc geninfo_all_blocks=1 00:10:44.331 --rc geninfo_unexecuted_blocks=1 00:10:44.331 00:10:44.331 ' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.331 --rc genhtml_branch_coverage=1 00:10:44.331 --rc genhtml_function_coverage=1 00:10:44.331 --rc genhtml_legend=1 00:10:44.331 --rc geninfo_all_blocks=1 00:10:44.331 --rc geninfo_unexecuted_blocks=1 00:10:44.331 00:10:44.331 ' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.331 --rc genhtml_branch_coverage=1 00:10:44.331 --rc genhtml_function_coverage=1 00:10:44.331 --rc genhtml_legend=1 00:10:44.331 --rc geninfo_all_blocks=1 00:10:44.331 --rc geninfo_unexecuted_blocks=1 00:10:44.331 00:10:44.331 ' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.331 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same go/golangci/protoc trio repeated from earlier prepends ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same repeated trio as above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same repeated trio as above ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.331 09:42:31
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:44.331 Cannot 
find device "nvmf_init_br" 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:44.331 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:44.589 Cannot find device "nvmf_init_br2" 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:44.589 Cannot find device "nvmf_tgt_br" 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.589 Cannot find device "nvmf_tgt_br2" 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:44.589 Cannot find device "nvmf_init_br" 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:44.589 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:44.590 Cannot find device "nvmf_init_br2" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:44.590 Cannot find device "nvmf_tgt_br" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:44.590 Cannot find device "nvmf_tgt_br2" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:44.590 Cannot find device "nvmf_br" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:44.590 Cannot find device "nvmf_init_if" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:44.590 Cannot find device "nvmf_init_if2" 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
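[Annotation] The "Cannot find device" probes just above are common.sh's pre-cleanup (tearing down leftovers from a prior run before each returns true), and the ip link add at common.sh@180 begins nvmf_veth_init, which continues below: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, the initiator ends left in the root namespace with 10.0.0.1/10.0.0.2, and a bridge stitching the peer interfaces together. A condensed sketch of the same topology, distilled from the trace (interface and namespace names match the harness; the loops and ordering are my condensation, and error handling plus the harness's ipts wrapper, which tags each iptables rule with an SPDK_NVMF comment for later cleanup, are omitted):

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per endpoint; the *_br peers stay in the root namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side interfaces live in the namespace with the SPDK target.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator side gets 10.0.0.1/.2, target side 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # A bridge in the root namespace connects the four peer ends.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Admit NVMe/TCP (port 4420), allow bridged traffic, verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The four pings mirror common.sh@222-225 in the trace below: initiator to target across the bridge, then target namespace back to the initiator, proving both directions route before any NVMe-oF connect is attempted.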
00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:44.590 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:44.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:10:44.848 00:10:44.848 --- 10.0.0.3 ping statistics --- 00:10:44.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.848 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:44.848 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:44.848 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:44.848 00:10:44.848 --- 10.0.0.4 ping statistics --- 00:10:44.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.848 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:44.848 00:10:44.848 --- 10.0.0.1 ping statistics --- 00:10:44.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.848 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:44.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:10:44.848 00:10:44.848 --- 10.0.0.2 ping statistics --- 00:10:44.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.848 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=70912 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 70912 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 70912 ']' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.848 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.848 [2024-12-05 09:42:32.457290] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:44.848 [2024-12-05 09:42:32.457546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.107 [2024-12-05 09:42:32.652794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.365 [2024-12-05 09:42:32.814614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.365 [2024-12-05 09:42:32.814685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.365 [2024-12-05 09:42:32.814705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.365 [2024-12-05 09:42:32.814716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.365 [2024-12-05 09:42:32.814730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.365 [2024-12-05 09:42:32.816958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.365 [2024-12-05 09:42:32.817082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.365 [2024-12-05 09:42:32.817224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.365 [2024-12-05 09:42:32.817401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.933 [2024-12-05 09:42:33.510709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.933 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 Malloc0 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 [2024-12-05 09:42:33.624533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 test case1: single bdev can't be used in multiple subsystems 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 [2024-12-05 09:42:33.648261] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:46.194 [2024-12-05 09:42:33.648325] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:46.194 [2024-12-05 09:42:33.648343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.194 2024/12/05 09:42:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.194 request: 00:10:46.194 { 00:10:46.194 "method": "nvmf_subsystem_add_ns", 00:10:46.194 "params": { 00:10:46.194 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:46.194 "namespace": { 00:10:46.194 "bdev_name": "Malloc0", 00:10:46.194 "no_auto_visible": false, 00:10:46.194 "hide_metadata": false 00:10:46.194 } 00:10:46.194 } 00:10:46.194 } 00:10:46.194 Got JSON-RPC error response 00:10:46.194 GoRPCClient: error on JSON-RPC call 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:46.194 Adding namespace failed - expected result. 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:46.194 test case2: host connect to nvmf target in multiple paths 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.194 [2024-12-05 09:42:33.660502] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.194 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:46.454 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:46.454 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.454 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.454 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.454 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:46.454 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
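The RPC traffic behind test case1 and case2 above condenses to a short script. A hedged sketch using rpc.py and nvme-cli with the same NQNs and addresses as the log; the --hostnqn/--hostid flags are dropped (the subsystems are created with -a, allow-any-host) and the `|| echo` stands in for the harness's nmic_status bookkeeping:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Baseline subsystem: one malloc bdev, one namespace, one listener.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # case1: Malloc0 is already claimed exclusive_write by cnode1, so a
    # second subsystem must fail to add it as a namespace.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'Adding namespace failed - expected result.'

    # case2: a second listener on the same subsystem gives the host two
    # paths to the same namespace.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421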
00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:48.984 09:42:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.984 [global] 00:10:48.984 thread=1 00:10:48.984 invalidate=1 00:10:48.984 rw=write 00:10:48.984 time_based=1 00:10:48.984 runtime=1 00:10:48.984 ioengine=libaio 00:10:48.984 direct=1 00:10:48.984 bs=4096 00:10:48.984 iodepth=1 00:10:48.984 norandommap=0 00:10:48.984 numjobs=1 00:10:48.984 00:10:48.984 verify_dump=1 00:10:48.984 verify_backlog=512 00:10:48.984 verify_state_save=0 00:10:48.984 do_verify=1 00:10:48.984 verify=crc32c-intel 00:10:48.984 [job0] 00:10:48.984 filename=/dev/nvme0n1 00:10:48.984 Could not set queue depth (nvme0n1) 00:10:48.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.984 fio-3.35 00:10:48.984 Starting 1 thread 00:10:49.920 00:10:49.920 job0: (groupid=0, jobs=1): err= 0: pid=71022: Thu Dec 5 09:42:37 2024 00:10:49.920 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:49.920 slat (nsec): min=12884, max=80460, avg=15917.02, stdev=5844.04 00:10:49.920 clat (usec): min=152, max=426, avg=186.85, stdev=19.99 00:10:49.920 lat (usec): min=165, max=441, avg=202.77, stdev=20.98 00:10:49.920 clat percentiles (usec): 00:10:49.920 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:10:49.920 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:10:49.920 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:10:49.920 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 297], 00:10:49.920 | 99.99th=[ 429] 00:10:49.920 write: IOPS=3054, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:10:49.920 slat (usec): min=17, max=128, avg=22.49, stdev= 7.48 00:10:49.920 clat (usec): min=107, max=323, avg=132.14, stdev=16.73 00:10:49.920 lat (usec): min=127, max=404, avg=154.63, stdev=19.28 00:10:49.920 clat percentiles (usec): 00:10:49.920 | 1.00th=[ 112], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 121], 00:10:49.920 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:10:49.920 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 165], 00:10:49.920 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 225], 99.95th=[ 277], 00:10:49.920 | 99.99th=[ 326] 00:10:49.920 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:49.920 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:49.920 lat (usec) : 250=99.52%, 500=0.48% 00:10:49.920 cpu : usr=1.30%, sys=8.50%, ctx=5618, majf=0, minf=5 00:10:49.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.920 issued rwts: total=2560,3058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.920 00:10:49.920 Run status group 0 (all jobs): 00:10:49.920 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:49.920 WRITE: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), 
run=1001-1001msec 00:10:49.920 00:10:49.920 Disk stats (read/write): 00:10:49.920 nvme0n1: ios=2486/2560, merge=0/0, ticks=504/362, in_queue=866, util=91.78% 00:10:49.920 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.179 rmmod nvme_tcp 00:10:50.179 rmmod nvme_fabrics 00:10:50.179 rmmod nvme_keyring 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 70912 ']' 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 70912 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 70912 ']' 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 70912 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70912 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.179 killing process with pid 70912 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 70912' 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 70912 00:10:50.179 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 70912 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:51.558 09:42:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:51.558 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:51.559 00:10:51.559 real 0m7.476s 00:10:51.559 user 0m23.378s 00:10:51.559 sys 0m1.620s 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.559 
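nvmftestfini above undoes the whole setup: stop the target, strip only the SPDK-tagged firewall rules, then delete the virtual links and the namespace. A condensed sketch of that teardown (pid handling simplified to a plain kill/wait; the final netns delete is assumed from remove_spdk_ns):

    kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf_tgt reactors

    # iptr: every rule was inserted with an '-m comment SPDK_NVMF:...'
    # tag, so dump the tables, drop the tagged lines, restore the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Tear the topology down in reverse order of creation.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk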
************************************ 00:10:51.559 END TEST nvmf_nmic 00:10:51.559 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.559 ************************************ 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.818 ************************************ 00:10:51.818 START TEST nvmf_fio_target 00:10:51.818 ************************************ 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:51.818 * Looking for test storage... 00:10:51.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.818 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.819 --rc genhtml_branch_coverage=1 00:10:51.819 --rc genhtml_function_coverage=1 00:10:51.819 --rc genhtml_legend=1 00:10:51.819 --rc geninfo_all_blocks=1 00:10:51.819 --rc geninfo_unexecuted_blocks=1 00:10:51.819 00:10:51.819 ' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.819 --rc genhtml_branch_coverage=1 00:10:51.819 --rc genhtml_function_coverage=1 00:10:51.819 --rc genhtml_legend=1 00:10:51.819 --rc geninfo_all_blocks=1 00:10:51.819 --rc geninfo_unexecuted_blocks=1 00:10:51.819 00:10:51.819 ' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.819 --rc genhtml_branch_coverage=1 00:10:51.819 --rc genhtml_function_coverage=1 00:10:51.819 --rc genhtml_legend=1 00:10:51.819 --rc geninfo_all_blocks=1 00:10:51.819 --rc geninfo_unexecuted_blocks=1 00:10:51.819 00:10:51.819 ' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.819 --rc genhtml_branch_coverage=1 00:10:51.819 --rc genhtml_function_coverage=1 00:10:51.819 --rc genhtml_legend=1 00:10:51.819 --rc geninfo_all_blocks=1 00:10:51.819 --rc geninfo_unexecuted_blocks=1 00:10:51.819 00:10:51.819 ' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:51.819 
09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:51.819 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.820 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.820 09:42:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:51.820 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:52.079 Cannot find device "nvmf_init_br" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:52.079 Cannot find device "nvmf_init_br2" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:52.079 Cannot find device "nvmf_tgt_br" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:52.079 Cannot find device "nvmf_tgt_br2" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:52.079 Cannot find device "nvmf_init_br" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:52.079 Cannot find device "nvmf_init_br2" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:52.079 Cannot find device "nvmf_tgt_br" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:52.079 Cannot find device "nvmf_tgt_br2" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:52.079 Cannot find device "nvmf_br" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:52.079 Cannot find device "nvmf_init_if" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:52.079 Cannot find device "nvmf_init_if2" 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:52.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:52.079 
09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:52.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:52.079 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:52.338 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:52.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:52.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:10:52.339 00:10:52.339 --- 10.0.0.3 ping statistics --- 00:10:52.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.339 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:52.339 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:52.339 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:52.339 00:10:52.339 --- 10.0.0.4 ping statistics --- 00:10:52.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.339 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:52.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:10:52.339 00:10:52.339 --- 10.0.0.1 ping statistics --- 00:10:52.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.339 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:52.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:52.339 00:10:52.339 --- 10.0.0.2 ping statistics --- 00:10:52.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.339 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=71278 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 71278 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 71278 ']' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.339 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.598 [2024-12-05 09:42:40.011248] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:52.598 [2024-12-05 09:42:40.011428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.598 [2024-12-05 09:42:40.207033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.857 [2024-12-05 09:42:40.325861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.857 [2024-12-05 09:42:40.325926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.857 [2024-12-05 09:42:40.325985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.857 [2024-12-05 09:42:40.325998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.857 [2024-12-05 09:42:40.326011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.857 [2024-12-05 09:42:40.328146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.857 [2024-12-05 09:42:40.328244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.857 [2024-12-05 09:42:40.328784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.857 [2024-12-05 09:42:40.330172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.426 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.426 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:53.426 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.426 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.426 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.426 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.426 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:53.688 [2024-12-05 09:42:41.301996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.946 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:54.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.462 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:54.462 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.029 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:55.029 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.029 09:42:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:55.029 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:55.597 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.856 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:55.857 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.116 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:56.116 09:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.686 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:56.686 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:56.686 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.944 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:56.944 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.203 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:57.203 09:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.462 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:57.719 [2024-12-05 09:42:45.256714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:57.719 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:57.977 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:58.235 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:58.494 09:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:00.396 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:00.397 09:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:00.397 [global] 00:11:00.397 thread=1 00:11:00.397 invalidate=1 00:11:00.397 rw=write 00:11:00.397 time_based=1 00:11:00.397 runtime=1 00:11:00.397 ioengine=libaio 00:11:00.397 direct=1 00:11:00.397 bs=4096 00:11:00.397 iodepth=1 00:11:00.397 norandommap=0 00:11:00.397 numjobs=1 00:11:00.397 00:11:00.397 verify_dump=1 00:11:00.397 verify_backlog=512 00:11:00.397 verify_state_save=0 00:11:00.397 do_verify=1 00:11:00.397 verify=crc32c-intel 00:11:00.397 [job0] 00:11:00.397 filename=/dev/nvme0n1 00:11:00.397 [job1] 00:11:00.397 filename=/dev/nvme0n2 00:11:00.397 [job2] 00:11:00.397 filename=/dev/nvme0n3 00:11:00.397 [job3] 00:11:00.397 filename=/dev/nvme0n4 00:11:00.657 Could not set queue depth (nvme0n1) 00:11:00.657 Could not set queue depth (nvme0n2) 00:11:00.657 Could not set queue depth (nvme0n3) 00:11:00.657 Could not set queue depth (nvme0n4) 00:11:00.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.657 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.657 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.657 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.657 fio-3.35 00:11:00.657 Starting 4 threads 00:11:02.062 00:11:02.062 job0: (groupid=0, jobs=1): err= 0: pid=71576: Thu Dec 5 09:42:49 2024 00:11:02.062 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:02.062 slat (nsec): min=8938, max=73311, avg=19581.28, stdev=8724.45 00:11:02.062 clat (usec): min=304, max=2020, avg=539.25, stdev=98.60 00:11:02.062 lat (usec): min=335, max=2047, avg=558.83, stdev=101.00 00:11:02.062 clat percentiles (usec): 00:11:02.062 | 1.00th=[ 367], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 478], 00:11:02.062 | 30.00th=[ 494], 40.00th=[ 510], 50.00th=[ 523], 60.00th=[ 537], 00:11:02.062 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 652], 95.00th=[ 725], 00:11:02.062 | 99.00th=[ 816], 99.50th=[ 930], 99.90th=[ 1106], 99.95th=[ 2024], 00:11:02.062 | 99.99th=[ 2024] 00:11:02.062 write: IOPS=1057, BW=4232KiB/s (4333kB/s)(4236KiB/1001msec); 0 zone resets 00:11:02.062 slat 
(nsec): min=12736, max=99583, avg=39668.16, stdev=9817.47 00:11:02.062 clat (usec): min=162, max=2810, avg=358.04, stdev=106.43 00:11:02.062 lat (usec): min=202, max=2847, avg=397.71, stdev=106.44 00:11:02.062 clat percentiles (usec): 00:11:02.062 | 1.00th=[ 231], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 289], 00:11:02.062 | 30.00th=[ 302], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 375], 00:11:02.062 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:11:02.062 | 99.00th=[ 529], 99.50th=[ 709], 99.90th=[ 840], 99.95th=[ 2802], 00:11:02.062 | 99.99th=[ 2802] 00:11:02.062 bw ( KiB/s): min= 4240, max= 4240, per=19.90%, avg=4240.00, stdev= 0.00, samples=1 00:11:02.062 iops : min= 1060, max= 1060, avg=1060.00, stdev= 0.00, samples=1 00:11:02.062 lat (usec) : 250=1.78%, 500=64.57%, 750=32.02%, 1000=1.49% 00:11:02.062 lat (msec) : 2=0.05%, 4=0.10% 00:11:02.062 cpu : usr=2.20%, sys=4.40%, ctx=2090, majf=0, minf=13 00:11:02.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.062 issued rwts: total=1024,1059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.062 job1: (groupid=0, jobs=1): err= 0: pid=71577: Thu Dec 5 09:42:49 2024 00:11:02.062 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:02.062 slat (usec): min=9, max=109, avg=34.46, stdev=16.75 00:11:02.062 clat (usec): min=368, max=5268, avg=536.65, stdev=207.93 00:11:02.062 lat (usec): min=390, max=5309, avg=571.11, stdev=209.95 00:11:02.062 clat percentiles (usec): 00:11:02.062 | 1.00th=[ 392], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 469], 00:11:02.062 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 510], 60.00th=[ 529], 00:11:02.062 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 660], 00:11:02.062 | 99.00th=[ 857], 99.50th=[ 1139], 99.90th=[ 3785], 99.95th=[ 5276], 00:11:02.062 | 99.99th=[ 5276] 00:11:02.062 write: IOPS=1027, BW=4112KiB/s (4211kB/s)(4116KiB/1001msec); 0 zone resets 00:11:02.062 slat (nsec): min=24604, max=95819, avg=44854.30, stdev=8677.34 00:11:02.062 clat (usec): min=180, max=984, avg=350.88, stdev=82.91 00:11:02.062 lat (usec): min=211, max=1021, avg=395.73, stdev=83.91 00:11:02.062 clat percentiles (usec): 00:11:02.062 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 258], 20.00th=[ 277], 00:11:02.062 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 343], 60.00th=[ 383], 00:11:02.062 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 469], 00:11:02.062 | 99.00th=[ 523], 99.50th=[ 709], 99.90th=[ 791], 99.95th=[ 988], 00:11:02.062 | 99.99th=[ 988] 00:11:02.062 bw ( KiB/s): min= 4328, max= 4328, per=20.31%, avg=4328.00, stdev= 0.00, samples=1 00:11:02.062 iops : min= 1082, max= 1082, avg=1082.00, stdev= 0.00, samples=1 00:11:02.062 lat (usec) : 250=3.75%, 500=65.32%, 750=29.86%, 1000=0.78% 00:11:02.062 lat (msec) : 2=0.15%, 4=0.10%, 10=0.05% 00:11:02.062 cpu : usr=1.90%, sys=6.20%, ctx=2139, majf=0, minf=7 00:11:02.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.062 issued rwts: total=1024,1029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.062 
job2: (groupid=0, jobs=1): err= 0: pid=71578: Thu Dec 5 09:42:49 2024 00:11:02.062 read: IOPS=1933, BW=7732KiB/s (7918kB/s)(7740KiB/1001msec) 00:11:02.063 slat (nsec): min=12672, max=63210, avg=16653.03, stdev=4925.72 00:11:02.063 clat (usec): min=189, max=790, avg=256.53, stdev=30.50 00:11:02.063 lat (usec): min=212, max=817, avg=273.18, stdev=31.33 00:11:02.063 clat percentiles (usec): 00:11:02.063 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:11:02.063 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:11:02.063 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:11:02.063 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 676], 99.95th=[ 791], 00:11:02.063 | 99.99th=[ 791] 00:11:02.063 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:02.063 slat (usec): min=17, max=108, avg=24.24, stdev= 7.85 00:11:02.063 clat (usec): min=135, max=396, avg=202.71, stdev=28.74 00:11:02.063 lat (usec): min=155, max=488, avg=226.94, stdev=32.48 00:11:02.063 clat percentiles (usec): 00:11:02.063 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 182], 00:11:02.063 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:11:02.063 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 255], 00:11:02.063 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 396], 99.95th=[ 396], 00:11:02.063 | 99.99th=[ 396] 00:11:02.063 bw ( KiB/s): min= 8192, max= 8192, per=38.44%, avg=8192.00, stdev= 0.00, samples=1 00:11:02.063 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:02.063 lat (usec) : 250=70.55%, 500=29.37%, 750=0.05%, 1000=0.03% 00:11:02.063 cpu : usr=1.60%, sys=5.90%, ctx=3991, majf=0, minf=11 00:11:02.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.063 issued rwts: total=1935,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.063 job3: (groupid=0, jobs=1): err= 0: pid=71579: Thu Dec 5 09:42:49 2024 00:11:02.063 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:02.063 slat (usec): min=9, max=108, avg=22.17, stdev=10.12 00:11:02.063 clat (usec): min=233, max=7626, avg=538.62, stdev=359.99 00:11:02.063 lat (usec): min=254, max=7661, avg=560.79, stdev=360.96 00:11:02.063 clat percentiles (usec): 00:11:02.063 | 1.00th=[ 247], 5.00th=[ 269], 10.00th=[ 306], 20.00th=[ 457], 00:11:02.063 | 30.00th=[ 478], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 537], 00:11:02.063 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 693], 95.00th=[ 766], 00:11:02.063 | 99.00th=[ 865], 99.50th=[ 1012], 99.90th=[ 6980], 99.95th=[ 7635], 00:11:02.063 | 99.99th=[ 7635] 00:11:02.063 write: IOPS=1195, BW=4783KiB/s (4898kB/s)(4788KiB/1001msec); 0 zone resets 00:11:02.063 slat (usec): min=13, max=111, avg=27.80, stdev= 7.86 00:11:02.063 clat (usec): min=169, max=3458, avg=323.02, stdev=162.74 00:11:02.063 lat (usec): min=194, max=3487, avg=350.82, stdev=162.95 00:11:02.063 clat percentiles (usec): 00:11:02.063 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 210], 00:11:02.063 | 30.00th=[ 225], 40.00th=[ 247], 50.00th=[ 285], 60.00th=[ 351], 00:11:02.063 | 70.00th=[ 404], 80.00th=[ 445], 90.00th=[ 474], 95.00th=[ 494], 00:11:02.063 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 3097], 99.95th=[ 3458], 00:11:02.063 | 99.99th=[ 3458] 
00:11:02.063 bw ( KiB/s): min= 5640, max= 5640, per=26.47%, avg=5640.00, stdev= 0.00, samples=1 00:11:02.063 iops : min= 1410, max= 1410, avg=1410.00, stdev= 0.00, samples=1 00:11:02.063 lat (usec) : 250=22.83%, 500=47.46%, 750=26.79%, 1000=2.57% 00:11:02.063 lat (msec) : 2=0.05%, 4=0.23%, 10=0.09% 00:11:02.063 cpu : usr=1.50%, sys=4.40%, ctx=2306, majf=0, minf=5 00:11:02.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.063 issued rwts: total=1024,1197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.063 00:11:02.063 Run status group 0 (all jobs): 00:11:02.063 READ: bw=19.5MiB/s (20.5MB/s), 4092KiB/s-7732KiB/s (4190kB/s-7918kB/s), io=19.6MiB (20.5MB), run=1001-1001msec 00:11:02.063 WRITE: bw=20.8MiB/s (21.8MB/s), 4112KiB/s-8184KiB/s (4211kB/s-8380kB/s), io=20.8MiB (21.8MB), run=1001-1001msec 00:11:02.063 00:11:02.063 Disk stats (read/write): 00:11:02.063 nvme0n1: ios=823/1024, merge=0/0, ticks=468/382, in_queue=850, util=88.88% 00:11:02.063 nvme0n2: ios=806/1024, merge=0/0, ticks=472/372, in_queue=844, util=88.45% 00:11:02.063 nvme0n3: ios=1536/1879, merge=0/0, ticks=392/404, in_queue=796, util=89.12% 00:11:02.063 nvme0n4: ios=861/1024, merge=0/0, ticks=454/321, in_queue=775, util=88.62% 00:11:02.063 09:42:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:02.063 [global] 00:11:02.063 thread=1 00:11:02.063 invalidate=1 00:11:02.063 rw=randwrite 00:11:02.063 time_based=1 00:11:02.063 runtime=1 00:11:02.063 ioengine=libaio 00:11:02.063 direct=1 00:11:02.063 bs=4096 00:11:02.063 iodepth=1 00:11:02.063 norandommap=0 00:11:02.063 numjobs=1 00:11:02.063 00:11:02.063 verify_dump=1 00:11:02.063 verify_backlog=512 00:11:02.063 verify_state_save=0 00:11:02.063 do_verify=1 00:11:02.063 verify=crc32c-intel 00:11:02.063 [job0] 00:11:02.063 filename=/dev/nvme0n1 00:11:02.063 [job1] 00:11:02.063 filename=/dev/nvme0n2 00:11:02.063 [job2] 00:11:02.063 filename=/dev/nvme0n3 00:11:02.063 [job3] 00:11:02.063 filename=/dev/nvme0n4 00:11:02.063 Could not set queue depth (nvme0n1) 00:11:02.063 Could not set queue depth (nvme0n2) 00:11:02.063 Could not set queue depth (nvme0n3) 00:11:02.063 Could not set queue depth (nvme0n4) 00:11:02.063 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.063 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.063 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.063 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.063 fio-3.35 00:11:02.063 Starting 4 threads 00:11:03.442 00:11:03.442 job0: (groupid=0, jobs=1): err= 0: pid=71636: Thu Dec 5 09:42:50 2024 00:11:03.442 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:03.442 slat (nsec): min=12121, max=63218, avg=14938.87, stdev=4661.87 00:11:03.442 clat (usec): min=202, max=2247, avg=255.53, stdev=56.68 00:11:03.442 lat (usec): min=216, max=2262, avg=270.47, stdev=57.25 00:11:03.442 clat percentiles (usec): 00:11:03.442 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 
20.00th=[ 233], 00:11:03.442 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:11:03.442 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:11:03.442 | 99.00th=[ 367], 99.50th=[ 416], 99.90th=[ 611], 99.95th=[ 1012], 00:11:03.442 | 99.99th=[ 2245] 00:11:03.442 write: IOPS=2077, BW=8312KiB/s (8511kB/s)(8320KiB/1001msec); 0 zone resets 00:11:03.442 slat (nsec): min=18133, max=90665, avg=22557.28, stdev=6661.10 00:11:03.442 clat (usec): min=139, max=315, avg=188.67, stdev=23.45 00:11:03.442 lat (usec): min=159, max=335, avg=211.23, stdev=25.47 00:11:03.442 clat percentiles (usec): 00:11:03.442 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:11:03.442 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:11:03.442 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 235], 00:11:03.442 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 297], 00:11:03.442 | 99.99th=[ 318] 00:11:03.442 bw ( KiB/s): min= 8192, max= 8192, per=29.61%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.442 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.442 lat (usec) : 250=75.70%, 500=24.15%, 750=0.10% 00:11:03.442 lat (msec) : 2=0.02%, 4=0.02% 00:11:03.442 cpu : usr=1.80%, sys=5.40%, ctx=4131, majf=0, minf=13 00:11:03.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.442 issued rwts: total=2048,2080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.442 job1: (groupid=0, jobs=1): err= 0: pid=71637: Thu Dec 5 09:42:50 2024 00:11:03.442 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:03.442 slat (usec): min=15, max=104, avg=33.90, stdev=10.60 00:11:03.442 clat (usec): min=223, max=679, avg=417.63, stdev=52.43 00:11:03.442 lat (usec): min=241, max=722, avg=451.53, stdev=51.63 00:11:03.442 clat percentiles (usec): 00:11:03.442 | 1.00th=[ 273], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 375], 00:11:03.442 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 433], 00:11:03.442 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 502], 00:11:03.442 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 660], 99.95th=[ 676], 00:11:03.442 | 99.99th=[ 676] 00:11:03.442 write: IOPS=1412, BW=5650KiB/s (5786kB/s)(5656KiB/1001msec); 0 zone resets 00:11:03.442 slat (usec): min=25, max=142, avg=42.38, stdev= 8.55 00:11:03.442 clat (usec): min=155, max=1852, avg=331.31, stdev=73.21 00:11:03.442 lat (usec): min=195, max=1903, avg=373.69, stdev=72.87 00:11:03.442 clat percentiles (usec): 00:11:03.442 | 1.00th=[ 204], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 277], 00:11:03.442 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 338], 00:11:03.442 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 437], 00:11:03.442 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 545], 99.95th=[ 1860], 00:11:03.442 | 99.99th=[ 1860] 00:11:03.442 bw ( KiB/s): min= 5612, max= 5612, per=20.28%, avg=5612.00, stdev= 0.00, samples=1 00:11:03.442 iops : min= 1403, max= 1403, avg=1403.00, stdev= 0.00, samples=1 00:11:03.442 lat (usec) : 250=3.65%, 500=94.05%, 750=2.26% 00:11:03.442 lat (msec) : 2=0.04% 00:11:03.442 cpu : usr=2.40%, sys=7.00%, ctx=2439, majf=0, minf=9 00:11:03.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:03.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.442 issued rwts: total=1024,1414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.442 job2: (groupid=0, jobs=1): err= 0: pid=71638: Thu Dec 5 09:42:50 2024 00:11:03.442 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:03.442 slat (nsec): min=16460, max=81310, avg=24485.97, stdev=6859.93 00:11:03.442 clat (usec): min=336, max=1022, avg=436.03, stdev=53.97 00:11:03.442 lat (usec): min=369, max=1052, avg=460.52, stdev=54.92 00:11:03.442 clat percentiles (usec): 00:11:03.442 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 392], 00:11:03.442 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 445], 00:11:03.442 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 529], 00:11:03.442 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 1020], 00:11:03.442 | 99.99th=[ 1020] 00:11:03.442 write: IOPS=1380, BW=5522KiB/s (5655kB/s)(5528KiB/1001msec); 0 zone resets 00:11:03.442 slat (usec): min=26, max=108, avg=43.76, stdev= 9.01 00:11:03.442 clat (usec): min=167, max=2894, avg=333.18, stdev=97.62 00:11:03.443 lat (usec): min=217, max=2932, avg=376.94, stdev=97.80 00:11:03.443 clat percentiles (usec): 00:11:03.443 | 1.00th=[ 237], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 281], 00:11:03.443 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 334], 00:11:03.443 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 424], 00:11:03.443 | 99.00th=[ 474], 99.50th=[ 537], 99.90th=[ 1762], 99.95th=[ 2900], 00:11:03.443 | 99.99th=[ 2900] 00:11:03.443 bw ( KiB/s): min= 5520, max= 5520, per=19.95%, avg=5520.00, stdev= 0.00, samples=1 00:11:03.443 iops : min= 1380, max= 1380, avg=1380.00, stdev= 0.00, samples=1 00:11:03.443 lat (usec) : 250=1.95%, 500=94.31%, 750=3.62% 00:11:03.443 lat (msec) : 2=0.08%, 4=0.04% 00:11:03.443 cpu : usr=1.70%, sys=6.60%, ctx=2407, majf=0, minf=14 00:11:03.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.443 issued rwts: total=1024,1382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.443 job3: (groupid=0, jobs=1): err= 0: pid=71639: Thu Dec 5 09:42:50 2024 00:11:03.443 read: IOPS=1588, BW=6354KiB/s (6506kB/s)(6360KiB/1001msec) 00:11:03.443 slat (nsec): min=11830, max=64064, avg=18009.00, stdev=4786.91 00:11:03.443 clat (usec): min=224, max=2800, avg=285.80, stdev=72.18 00:11:03.443 lat (usec): min=241, max=2819, avg=303.81, stdev=72.70 00:11:03.443 clat percentiles (usec): 00:11:03.443 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:11:03.443 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:11:03.443 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 343], 00:11:03.443 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 873], 99.95th=[ 2802], 00:11:03.443 | 99.99th=[ 2802] 00:11:03.443 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:03.443 slat (nsec): min=16795, max=80896, avg=26319.96, stdev=6767.23 00:11:03.443 clat (usec): min=161, max=521, avg=222.50, stdev=31.97 00:11:03.443 lat (usec): min=179, max=563, avg=248.82, stdev=33.82 
00:11:03.443 clat percentiles (usec): 00:11:03.443 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:11:03.443 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:11:03.443 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 281], 00:11:03.443 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 400], 00:11:03.443 | 99.99th=[ 523] 00:11:03.443 bw ( KiB/s): min= 8192, max= 8192, per=29.61%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.443 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.443 lat (usec) : 250=52.56%, 500=47.36%, 750=0.03%, 1000=0.03% 00:11:03.443 lat (msec) : 4=0.03% 00:11:03.443 cpu : usr=1.50%, sys=6.50%, ctx=3638, majf=0, minf=11 00:11:03.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.443 issued rwts: total=1590,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.443 00:11:03.443 Run status group 0 (all jobs): 00:11:03.443 READ: bw=22.2MiB/s (23.3MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=22.2MiB (23.3MB), run=1001-1001msec 00:11:03.443 WRITE: bw=27.0MiB/s (28.3MB/s), 5522KiB/s-8312KiB/s (5655kB/s-8511kB/s), io=27.0MiB (28.4MB), run=1001-1001msec 00:11:03.443 00:11:03.443 Disk stats (read/write): 00:11:03.443 nvme0n1: ios=1586/2043, merge=0/0, ticks=447/410, in_queue=857, util=88.28% 00:11:03.443 nvme0n2: ios=1068/1024, merge=0/0, ticks=478/360, in_queue=838, util=88.87% 00:11:03.443 nvme0n3: ios=1001/1024, merge=0/0, ticks=447/361, in_queue=808, util=89.05% 00:11:03.443 nvme0n4: ios=1524/1536, merge=0/0, ticks=444/361, in_queue=805, util=89.61% 00:11:03.443 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:03.443 [global] 00:11:03.443 thread=1 00:11:03.443 invalidate=1 00:11:03.443 rw=write 00:11:03.443 time_based=1 00:11:03.443 runtime=1 00:11:03.443 ioengine=libaio 00:11:03.443 direct=1 00:11:03.443 bs=4096 00:11:03.443 iodepth=128 00:11:03.443 norandommap=0 00:11:03.443 numjobs=1 00:11:03.443 00:11:03.443 verify_dump=1 00:11:03.443 verify_backlog=512 00:11:03.443 verify_state_save=0 00:11:03.443 do_verify=1 00:11:03.443 verify=crc32c-intel 00:11:03.443 [job0] 00:11:03.443 filename=/dev/nvme0n1 00:11:03.443 [job1] 00:11:03.443 filename=/dev/nvme0n2 00:11:03.443 [job2] 00:11:03.443 filename=/dev/nvme0n3 00:11:03.443 [job3] 00:11:03.443 filename=/dev/nvme0n4 00:11:03.443 Could not set queue depth (nvme0n1) 00:11:03.443 Could not set queue depth (nvme0n2) 00:11:03.443 Could not set queue depth (nvme0n3) 00:11:03.443 Could not set queue depth (nvme0n4) 00:11:03.443 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.443 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.443 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.443 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.443 fio-3.35 00:11:03.443 Starting 4 threads 00:11:04.823 00:11:04.823 job0: (groupid=0, jobs=1): err= 0: pid=71695: Thu Dec 5 09:42:52 2024 00:11:04.823 read: IOPS=3117, 
BW=12.2MiB/s (12.8MB/s)(12.2MiB/1002msec) 00:11:04.823 slat (usec): min=4, max=6558, avg=153.25, stdev=830.17 00:11:04.824 clat (usec): min=961, max=28264, avg=19064.13, stdev=2640.87 00:11:04.824 lat (usec): min=5181, max=28344, avg=19217.38, stdev=2731.45 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[ 5866], 5.00th=[15270], 10.00th=[16057], 20.00th=[17171], 00:11:04.824 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:11:04.824 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[22152], 00:11:04.824 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26870], 99.95th=[27919], 00:11:04.824 | 99.99th=[28181] 00:11:04.824 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:04.824 slat (usec): min=13, max=6396, avg=137.38, stdev=631.72 00:11:04.824 clat (usec): min=10049, max=26186, avg=18612.99, stdev=2515.04 00:11:04.824 lat (usec): min=10071, max=26227, avg=18750.37, stdev=2572.83 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[11469], 5.00th=[14746], 10.00th=[15270], 20.00th=[16188], 00:11:04.824 | 30.00th=[17433], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:11:04.824 | 70.00th=[20055], 80.00th=[20579], 90.00th=[20841], 95.00th=[21365], 00:11:04.824 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25822], 99.95th=[26084], 00:11:04.824 | 99.99th=[26084] 00:11:04.824 bw ( KiB/s): min=12464, max=12464, per=23.99%, avg=12464.00, stdev= 0.00, samples=1 00:11:04.824 iops : min= 3116, max= 3116, avg=3116.00, stdev= 0.00, samples=1 00:11:04.824 lat (usec) : 1000=0.01% 00:11:04.824 lat (msec) : 10=0.73%, 20=65.86%, 50=33.39% 00:11:04.824 cpu : usr=3.20%, sys=10.39%, ctx=303, majf=0, minf=1 00:11:04.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:04.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.824 issued rwts: total=3124,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.824 job1: (groupid=0, jobs=1): err= 0: pid=71696: Thu Dec 5 09:42:52 2024 00:11:04.824 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:04.824 slat (usec): min=3, max=14127, avg=156.04, stdev=852.79 00:11:04.824 clat (usec): min=13055, max=49341, avg=21120.68, stdev=7877.99 00:11:04.824 lat (usec): min=13854, max=49354, avg=21276.73, stdev=7903.22 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[13829], 5.00th=[15795], 10.00th=[17171], 20.00th=[17433], 00:11:04.824 | 30.00th=[17695], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:11:04.824 | 70.00th=[18482], 80.00th=[19530], 90.00th=[35914], 95.00th=[41157], 00:11:04.824 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:11:04.824 | 99.99th=[49546] 00:11:04.824 write: IOPS=3318, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1006msec); 0 zone resets 00:11:04.824 slat (usec): min=11, max=4988, avg=148.21, stdev=661.66 00:11:04.824 clat (usec): min=4524, max=39783, avg=18564.42, stdev=4485.18 00:11:04.824 lat (usec): min=5851, max=39807, avg=18712.64, stdev=4497.03 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[13566], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:11:04.824 | 30.00th=[15926], 40.00th=[16909], 50.00th=[18482], 60.00th=[19006], 00:11:04.824 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20841], 95.00th=[25822], 00:11:04.824 | 99.00th=[37487], 99.50th=[38536], 99.90th=[39060], 99.95th=[39584], 
00:11:04.824 | 99.99th=[39584] 00:11:04.824 bw ( KiB/s): min=10400, max=15288, per=24.72%, avg=12844.00, stdev=3456.34, samples=2 00:11:04.824 iops : min= 2600, max= 3822, avg=3211.00, stdev=864.08, samples=2 00:11:04.824 lat (msec) : 10=0.39%, 20=80.98%, 50=18.63% 00:11:04.824 cpu : usr=3.18%, sys=9.55%, ctx=377, majf=0, minf=3 00:11:04.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:04.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.824 issued rwts: total=3072,3338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.824 job2: (groupid=0, jobs=1): err= 0: pid=71697: Thu Dec 5 09:42:52 2024 00:11:04.824 read: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1006msec) 00:11:04.824 slat (usec): min=5, max=7049, avg=165.78, stdev=922.37 00:11:04.824 clat (usec): min=2521, max=30037, avg=21229.51, stdev=2899.42 00:11:04.824 lat (usec): min=7575, max=30076, avg=21395.30, stdev=3004.40 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[ 8160], 5.00th=[17957], 10.00th=[18220], 20.00th=[19006], 00:11:04.824 | 30.00th=[19792], 40.00th=[21627], 50.00th=[22152], 60.00th=[22414], 00:11:04.824 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23200], 95.00th=[25035], 00:11:04.824 | 99.00th=[28181], 99.50th=[29230], 99.90th=[29754], 99.95th=[30016], 00:11:04.824 | 99.99th=[30016] 00:11:04.824 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:04.824 slat (usec): min=10, max=7396, avg=163.67, stdev=758.29 00:11:04.824 clat (usec): min=12565, max=29130, avg=21475.41, stdev=2396.35 00:11:04.824 lat (usec): min=12590, max=30352, avg=21639.08, stdev=2464.38 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[13960], 5.00th=[17171], 10.00th=[18482], 20.00th=[19530], 00:11:04.824 | 30.00th=[20841], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:11:04.824 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23462], 95.00th=[24511], 00:11:04.824 | 99.00th=[27919], 99.50th=[28443], 99.90th=[28705], 99.95th=[29230], 00:11:04.824 | 99.99th=[29230] 00:11:04.824 bw ( KiB/s): min=12288, max=12288, per=23.65%, avg=12288.00, stdev= 0.00, samples=2 00:11:04.824 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:04.824 lat (msec) : 4=0.02%, 10=0.71%, 20=25.58%, 50=73.70% 00:11:04.824 cpu : usr=2.89%, sys=9.65%, ctx=275, majf=0, minf=2 00:11:04.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:04.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.824 issued rwts: total=2855,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.824 job3: (groupid=0, jobs=1): err= 0: pid=71698: Thu Dec 5 09:42:52 2024 00:11:04.824 read: IOPS=2666, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1006msec) 00:11:04.824 slat (usec): min=5, max=9490, avg=167.04, stdev=832.10 00:11:04.824 clat (usec): min=4822, max=42320, avg=21496.09, stdev=4459.47 00:11:04.824 lat (usec): min=6615, max=42391, avg=21663.13, stdev=4426.60 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[ 6980], 5.00th=[16581], 10.00th=[18220], 20.00th=[20055], 00:11:04.824 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:11:04.824 | 70.00th=[21365], 80.00th=[21627], 
90.00th=[25035], 95.00th=[32375], 00:11:04.824 | 99.00th=[38536], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:11:04.824 | 99.99th=[42206] 00:11:04.824 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:04.824 slat (usec): min=6, max=10416, avg=171.99, stdev=781.46 00:11:04.824 clat (usec): min=15420, max=44975, avg=22476.15, stdev=5892.31 00:11:04.824 lat (usec): min=15452, max=44997, avg=22648.14, stdev=5912.70 00:11:04.824 clat percentiles (usec): 00:11:04.824 | 1.00th=[16188], 5.00th=[16909], 10.00th=[17433], 20.00th=[17695], 00:11:04.824 | 30.00th=[18482], 40.00th=[20579], 50.00th=[21627], 60.00th=[21890], 00:11:04.824 | 70.00th=[22676], 80.00th=[23725], 90.00th=[30802], 95.00th=[35390], 00:11:04.824 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:11:04.824 | 99.99th=[44827] 00:11:04.824 bw ( KiB/s): min=12248, max=12288, per=23.61%, avg=12268.00, stdev=28.28, samples=2 00:11:04.824 iops : min= 3062, max= 3072, avg=3067.00, stdev= 7.07, samples=2 00:11:04.824 lat (msec) : 10=0.56%, 20=27.06%, 50=72.38% 00:11:04.824 cpu : usr=2.69%, sys=9.15%, ctx=342, majf=0, minf=3 00:11:04.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:04.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.824 issued rwts: total=2682,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.824 00:11:04.824 Run status group 0 (all jobs): 00:11:04.824 READ: bw=45.6MiB/s (47.8MB/s), 10.4MiB/s-12.2MiB/s (10.9MB/s-12.8MB/s), io=45.8MiB (48.1MB), run=1002-1006msec 00:11:04.824 WRITE: bw=50.7MiB/s (53.2MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=51.0MiB (53.5MB), run=1002-1006msec 00:11:04.824 00:11:04.824 Disk stats (read/write): 00:11:04.824 nvme0n1: ios=2652/3072, merge=0/0, ticks=16310/17599, in_queue=33909, util=89.18% 00:11:04.824 nvme0n2: ios=2928/3072, merge=0/0, ticks=13303/12205, in_queue=25508, util=89.81% 00:11:04.824 nvme0n3: ios=2432/2560, merge=0/0, ticks=16797/17271, in_queue=34068, util=89.45% 00:11:04.824 nvme0n4: ios=2560/2700, merge=0/0, ticks=12829/12434, in_queue=25263, util=89.40% 00:11:04.824 09:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:04.824 [global] 00:11:04.824 thread=1 00:11:04.824 invalidate=1 00:11:04.824 rw=randwrite 00:11:04.824 time_based=1 00:11:04.824 runtime=1 00:11:04.824 ioengine=libaio 00:11:04.824 direct=1 00:11:04.824 bs=4096 00:11:04.824 iodepth=128 00:11:04.824 norandommap=0 00:11:04.824 numjobs=1 00:11:04.824 00:11:04.824 verify_dump=1 00:11:04.824 verify_backlog=512 00:11:04.824 verify_state_save=0 00:11:04.824 do_verify=1 00:11:04.824 verify=crc32c-intel 00:11:04.824 [job0] 00:11:04.824 filename=/dev/nvme0n1 00:11:04.824 [job1] 00:11:04.824 filename=/dev/nvme0n2 00:11:04.824 [job2] 00:11:04.824 filename=/dev/nvme0n3 00:11:04.824 [job3] 00:11:04.824 filename=/dev/nvme0n4 00:11:04.824 Could not set queue depth (nvme0n1) 00:11:04.824 Could not set queue depth (nvme0n2) 00:11:04.824 Could not set queue depth (nvme0n3) 00:11:04.824 Could not set queue depth (nvme0n4) 00:11:04.824 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.825 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.825 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.825 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.825 fio-3.35 00:11:04.825 Starting 4 threads 00:11:06.203 00:11:06.204 job0: (groupid=0, jobs=1): err= 0: pid=71757: Thu Dec 5 09:42:53 2024 00:11:06.204 read: IOPS=2496, BW=9986KiB/s (10.2MB/s)(9.84MiB/1009msec) 00:11:06.204 slat (usec): min=4, max=18109, avg=215.63, stdev=1238.85 00:11:06.204 clat (usec): min=3008, max=54071, avg=26218.56, stdev=9009.85 00:11:06.204 lat (usec): min=7648, max=56132, avg=26434.19, stdev=9108.49 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[ 8586], 5.00th=[15795], 10.00th=[15926], 20.00th=[19006], 00:11:06.204 | 30.00th=[19792], 40.00th=[20579], 50.00th=[23987], 60.00th=[29492], 00:11:06.204 | 70.00th=[32113], 80.00th=[35390], 90.00th=[39584], 95.00th=[42206], 00:11:06.204 | 99.00th=[45351], 99.50th=[45876], 99.90th=[49546], 99.95th=[50594], 00:11:06.204 | 99.99th=[54264] 00:11:06.204 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:11:06.204 slat (usec): min=5, max=15430, avg=171.80, stdev=824.28 00:11:06.204 clat (usec): min=3136, max=47803, avg=24149.98, stdev=8982.27 00:11:06.204 lat (usec): min=3172, max=54273, avg=24321.78, stdev=9070.20 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[ 7308], 5.00th=[11076], 10.00th=[14484], 20.00th=[17957], 00:11:06.204 | 30.00th=[18744], 40.00th=[20317], 50.00th=[20841], 60.00th=[21365], 00:11:06.204 | 70.00th=[27657], 80.00th=[35914], 90.00th=[36963], 95.00th=[38536], 00:11:06.204 | 99.00th=[43254], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:11:06.204 | 99.99th=[47973] 00:11:06.204 bw ( KiB/s): min= 7568, max=12937, per=21.25%, avg=10252.50, stdev=3796.46, samples=2 00:11:06.204 iops : min= 1892, max= 3234, avg=2563.00, stdev=948.94, samples=2 00:11:06.204 lat (msec) : 4=0.14%, 10=2.72%, 20=32.41%, 50=64.70%, 100=0.04% 00:11:06.204 cpu : usr=3.08%, sys=6.55%, ctx=545, majf=0, minf=5 00:11:06.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.204 issued rwts: total=2519,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.204 job1: (groupid=0, jobs=1): err= 0: pid=71758: Thu Dec 5 09:42:53 2024 00:11:06.204 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:11:06.204 slat (usec): min=9, max=7782, avg=134.05, stdev=682.79 00:11:06.204 clat (usec): min=10083, max=24877, avg=16698.53, stdev=2241.91 00:11:06.204 lat (usec): min=10114, max=24908, avg=16832.58, stdev=2306.30 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[11076], 5.00th=[12649], 10.00th=[14091], 20.00th=[15533], 00:11:06.204 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:11:06.204 | 70.00th=[17171], 80.00th=[17695], 90.00th=[19792], 95.00th=[21103], 00:11:06.204 | 99.00th=[22938], 99.50th=[23725], 99.90th=[23987], 99.95th=[24773], 00:11:06.204 | 99.99th=[24773] 00:11:06.204 write: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1009msec); 0 zone resets 00:11:06.204 slat (usec): min=11, max=8215, avg=124.90, stdev=498.42 00:11:06.204 clat (usec): min=7543, max=25764, avg=17354.88, 
stdev=2235.14 00:11:06.204 lat (usec): min=8520, max=25811, avg=17479.78, stdev=2279.64 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[10290], 5.00th=[13042], 10.00th=[14877], 20.00th=[16450], 00:11:06.204 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:11:06.204 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19268], 95.00th=[21365], 00:11:06.204 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25297], 99.95th=[25560], 00:11:06.204 | 99.99th=[25822] 00:11:06.204 bw ( KiB/s): min=13896, max=16248, per=31.24%, avg=15072.00, stdev=1663.12, samples=2 00:11:06.204 iops : min= 3474, max= 4062, avg=3768.00, stdev=415.78, samples=2 00:11:06.204 lat (msec) : 10=0.41%, 20=91.58%, 50=8.01% 00:11:06.204 cpu : usr=4.76%, sys=10.52%, ctx=551, majf=0, minf=8 00:11:06.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.204 issued rwts: total=3584,3895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.204 job2: (groupid=0, jobs=1): err= 0: pid=71759: Thu Dec 5 09:42:53 2024 00:11:06.204 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1016msec) 00:11:06.204 slat (usec): min=5, max=20521, avg=174.56, stdev=1143.32 00:11:06.204 clat (usec): min=7131, max=43898, avg=21345.22, stdev=6004.55 00:11:06.204 lat (usec): min=7144, max=43913, avg=21519.78, stdev=6061.89 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[ 8029], 5.00th=[13829], 10.00th=[14746], 20.00th=[17433], 00:11:06.204 | 30.00th=[17957], 40.00th=[18744], 50.00th=[20055], 60.00th=[21890], 00:11:06.204 | 70.00th=[22414], 80.00th=[25035], 90.00th=[30278], 95.00th=[32900], 00:11:06.204 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:11:06.204 | 99.99th=[43779] 00:11:06.204 write: IOPS=3187, BW=12.4MiB/s (13.1MB/s)(12.6MiB/1016msec); 0 zone resets 00:11:06.204 slat (usec): min=5, max=19188, avg=134.60, stdev=692.89 00:11:06.204 clat (usec): min=5601, max=43847, avg=19491.72, stdev=4496.13 00:11:06.204 lat (usec): min=5624, max=43857, avg=19626.32, stdev=4560.92 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[ 6783], 5.00th=[ 8979], 10.00th=[11600], 20.00th=[18220], 00:11:06.204 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:11:06.204 | 70.00th=[21627], 80.00th=[22938], 90.00th=[23462], 95.00th=[23725], 00:11:06.204 | 99.00th=[24773], 99.50th=[25560], 99.90th=[39584], 99.95th=[43779], 00:11:06.204 | 99.99th=[43779] 00:11:06.204 bw ( KiB/s): min=12312, max=12600, per=25.82%, avg=12456.00, stdev=203.65, samples=2 00:11:06.204 iops : min= 3078, max= 3150, avg=3114.00, stdev=50.91, samples=2 00:11:06.204 lat (msec) : 10=4.41%, 20=39.89%, 50=55.71% 00:11:06.204 cpu : usr=3.15%, sys=8.87%, ctx=425, majf=0, minf=5 00:11:06.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.204 issued rwts: total=3072,3238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.204 job3: (groupid=0, jobs=1): err= 0: pid=71760: Thu Dec 5 09:42:53 2024 00:11:06.204 read: IOPS=2334, BW=9340KiB/s (9564kB/s)(9424KiB/1009msec) 00:11:06.204 slat 
(usec): min=4, max=10539, avg=198.75, stdev=1003.45 00:11:06.204 clat (usec): min=8376, max=47071, avg=25089.27, stdev=6808.83 00:11:06.204 lat (usec): min=8389, max=47132, avg=25288.02, stdev=6866.52 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[16057], 5.00th=[17171], 10.00th=[19006], 20.00th=[20317], 00:11:06.204 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21627], 60.00th=[22676], 00:11:06.204 | 70.00th=[30016], 80.00th=[32375], 90.00th=[35390], 95.00th=[37487], 00:11:06.204 | 99.00th=[43254], 99.50th=[44827], 99.90th=[46400], 99.95th=[46924], 00:11:06.204 | 99.99th=[46924] 00:11:06.204 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:11:06.204 slat (usec): min=5, max=9301, avg=199.52, stdev=781.31 00:11:06.204 clat (usec): min=15979, max=49552, avg=26568.28, stdev=8327.17 00:11:06.204 lat (usec): min=16002, max=49611, avg=26767.81, stdev=8386.75 00:11:06.204 clat percentiles (usec): 00:11:06.204 | 1.00th=[16712], 5.00th=[17171], 10.00th=[17433], 20.00th=[18220], 00:11:06.204 | 30.00th=[20841], 40.00th=[21890], 50.00th=[22414], 60.00th=[24249], 00:11:06.204 | 70.00th=[34866], 80.00th=[36439], 90.00th=[37487], 95.00th=[39060], 00:11:06.204 | 99.00th=[42730], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:11:06.204 | 99.99th=[49546] 00:11:06.204 bw ( KiB/s): min= 8192, max=12312, per=21.25%, avg=10252.00, stdev=2913.28, samples=2 00:11:06.204 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:11:06.204 lat (msec) : 10=0.26%, 20=20.95%, 50=78.78% 00:11:06.204 cpu : usr=2.98%, sys=6.94%, ctx=509, majf=0, minf=5 00:11:06.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.204 issued rwts: total=2356,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.204 00:11:06.204 Run status group 0 (all jobs): 00:11:06.204 READ: bw=44.3MiB/s (46.5MB/s), 9340KiB/s-13.9MiB/s (9564kB/s-14.5MB/s), io=45.0MiB (47.2MB), run=1009-1016msec 00:11:06.204 WRITE: bw=47.1MiB/s (49.4MB/s), 9.91MiB/s-15.1MiB/s (10.4MB/s-15.8MB/s), io=47.9MiB (50.2MB), run=1009-1016msec 00:11:06.204 00:11:06.204 Disk stats (read/write): 00:11:06.204 nvme0n1: ios=2098/2475, merge=0/0, ticks=39886/42632, in_queue=82518, util=89.18% 00:11:06.204 nvme0n2: ios=3121/3303, merge=0/0, ticks=24951/26055, in_queue=51006, util=90.28% 00:11:06.204 nvme0n3: ios=2560/2695, merge=0/0, ticks=53173/51267, in_queue=104440, util=89.15% 00:11:06.204 nvme0n4: ios=2048/2313, merge=0/0, ticks=16123/20136, in_queue=36259, util=89.40% 00:11:06.204 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:06.204 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71774 00:11:06.204 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:06.204 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:06.204 [global] 00:11:06.204 thread=1 00:11:06.204 invalidate=1 00:11:06.204 rw=read 00:11:06.204 time_based=1 00:11:06.204 runtime=10 00:11:06.204 ioengine=libaio 00:11:06.204 direct=1 00:11:06.204 bs=4096 00:11:06.204 iodepth=1 00:11:06.204 norandommap=1 00:11:06.204 numjobs=1 00:11:06.204 00:11:06.204 [job0] 00:11:06.204 
filename=/dev/nvme0n1 00:11:06.204 [job1] 00:11:06.204 filename=/dev/nvme0n2 00:11:06.204 [job2] 00:11:06.204 filename=/dev/nvme0n3 00:11:06.204 [job3] 00:11:06.204 filename=/dev/nvme0n4 00:11:06.204 Could not set queue depth (nvme0n1) 00:11:06.204 Could not set queue depth (nvme0n2) 00:11:06.204 Could not set queue depth (nvme0n3) 00:11:06.205 Could not set queue depth (nvme0n4) 00:11:06.205 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.205 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.205 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.205 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.205 fio-3.35 00:11:06.205 Starting 4 threads 00:11:09.488 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:09.488 fio: pid=71817, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:09.488 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42438656, buflen=4096 00:11:09.488 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:09.488 fio: pid=71816, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:09.488 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43094016, buflen=4096 00:11:09.488 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.488 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:09.770 fio: pid=71814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:09.770 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31453184, buflen=4096 00:11:10.027 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.028 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:10.286 fio: pid=71815, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:10.286 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=40292352, buflen=4096 00:11:10.286 00:11:10.286 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71814: Thu Dec 5 09:42:57 2024 00:11:10.286 read: IOPS=2215, BW=8862KiB/s (9075kB/s)(30.0MiB/3466msec) 00:11:10.286 slat (usec): min=7, max=11546, avg=28.59, stdev=226.15 00:11:10.286 clat (usec): min=190, max=7490, avg=420.03, stdev=167.12 00:11:10.286 lat (usec): min=208, max=15603, avg=448.62, stdev=299.47 00:11:10.286 clat percentiles (usec): 00:11:10.286 | 1.00th=[ 212], 5.00th=[ 237], 10.00th=[ 285], 20.00th=[ 383], 00:11:10.286 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:11:10.286 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 519], 00:11:10.286 | 99.00th=[ 668], 99.50th=[ 742], 99.90th=[ 3458], 99.95th=[ 3785], 00:11:10.286 | 99.99th=[ 7504] 00:11:10.286 bw ( 
KiB/s): min= 8392, max= 8760, per=21.53%, avg=8578.67, stdev=141.28, samples=6 00:11:10.286 iops : min= 2098, max= 2190, avg=2144.67, stdev=35.32, samples=6 00:11:10.286 lat (usec) : 250=6.78%, 500=85.25%, 750=7.47%, 1000=0.25% 00:11:10.286 lat (msec) : 2=0.08%, 4=0.12%, 10=0.04% 00:11:10.286 cpu : usr=1.21%, sys=4.39%, ctx=7692, majf=0, minf=1 00:11:10.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.286 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71815: Thu Dec 5 09:42:57 2024 00:11:10.286 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(38.4MiB/3855msec) 00:11:10.286 slat (usec): min=9, max=11257, avg=26.87, stdev=220.89 00:11:10.286 clat (usec): min=162, max=3450, avg=362.75, stdev=128.23 00:11:10.286 lat (usec): min=172, max=11635, avg=389.61, stdev=255.74 00:11:10.286 clat percentiles (usec): 00:11:10.286 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 223], 00:11:10.286 | 30.00th=[ 249], 40.00th=[ 379], 50.00th=[ 400], 60.00th=[ 416], 00:11:10.286 | 70.00th=[ 429], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 506], 00:11:10.286 | 99.00th=[ 619], 99.50th=[ 693], 99.90th=[ 1270], 99.95th=[ 1582], 00:11:10.286 | 99.99th=[ 3458] 00:11:10.286 bw ( KiB/s): min= 8464, max=14959, per=24.08%, avg=9595.29, stdev=2368.34, samples=7 00:11:10.286 iops : min= 2116, max= 3739, avg=2398.71, stdev=591.80, samples=7 00:11:10.286 lat (usec) : 250=30.39%, 500=63.82%, 750=5.42%, 1000=0.21% 00:11:10.286 lat (msec) : 2=0.10%, 4=0.04% 00:11:10.286 cpu : usr=1.09%, sys=4.80%, ctx=9851, majf=0, minf=2 00:11:10.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 issued rwts: total=9838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.286 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71816: Thu Dec 5 09:42:57 2024 00:11:10.286 read: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(41.1MiB/3221msec) 00:11:10.286 slat (usec): min=9, max=7754, avg=17.50, stdev=103.91 00:11:10.286 clat (usec): min=212, max=1982, avg=287.02, stdev=49.34 00:11:10.286 lat (usec): min=223, max=8037, avg=304.52, stdev=114.92 00:11:10.286 clat percentiles (usec): 00:11:10.286 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 255], 00:11:10.286 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 289], 00:11:10.286 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 363], 00:11:10.286 | 99.00th=[ 441], 99.50th=[ 482], 99.90th=[ 603], 99.95th=[ 717], 00:11:10.286 | 99.99th=[ 1795] 00:11:10.286 bw ( KiB/s): min=12728, max=13720, per=33.54%, avg=13362.67, stdev=357.01, samples=6 00:11:10.286 iops : min= 3182, max= 3430, avg=3340.67, stdev=89.25, samples=6 00:11:10.286 lat (usec) : 250=15.75%, 500=83.79%, 750=0.41%, 1000=0.01% 00:11:10.286 lat (msec) : 2=0.04% 00:11:10.286 cpu : usr=0.65%, sys=4.57%, ctx=10526, majf=0, minf=2 00:11:10.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:10.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 issued rwts: total=10522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.286 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=71817: Thu Dec 5 09:42:57 2024 00:11:10.286 read: IOPS=3505, BW=13.7MiB/s (14.4MB/s)(40.5MiB/2956msec) 00:11:10.286 slat (usec): min=10, max=121, avg=14.98, stdev= 4.80 00:11:10.286 clat (usec): min=196, max=3426, avg=268.45, stdev=52.68 00:11:10.286 lat (usec): min=208, max=3438, avg=283.43, stdev=53.19 00:11:10.286 clat percentiles (usec): 00:11:10.286 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:11:10.286 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:11:10.286 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:11:10.286 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 570], 99.95th=[ 832], 00:11:10.286 | 99.99th=[ 2073] 00:11:10.286 bw ( KiB/s): min=13584, max=14384, per=35.21%, avg=14027.20, stdev=322.97, samples=5 00:11:10.286 iops : min= 3396, max= 3596, avg=3506.80, stdev=80.74, samples=5 00:11:10.286 lat (usec) : 250=27.41%, 500=72.48%, 750=0.04%, 1000=0.03% 00:11:10.286 lat (msec) : 4=0.04% 00:11:10.286 cpu : usr=0.98%, sys=4.53%, ctx=10364, majf=0, minf=2 00:11:10.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.286 issued rwts: total=10362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.286 00:11:10.286 Run status group 0 (all jobs): 00:11:10.286 READ: bw=38.9MiB/s (40.8MB/s), 8862KiB/s-13.7MiB/s (9075kB/s-14.4MB/s), io=150MiB (157MB), run=2956-3855msec 00:11:10.286 00:11:10.286 Disk stats (read/write): 00:11:10.286 nvme0n1: ios=7453/0, merge=0/0, ticks=3185/0, in_queue=3185, util=95.20% 00:11:10.286 nvme0n2: ios=8825/0, merge=0/0, ticks=3379/0, in_queue=3379, util=95.45% 00:11:10.286 nvme0n3: ios=10281/0, merge=0/0, ticks=2988/0, in_queue=2988, util=96.37% 00:11:10.286 nvme0n4: ios=10076/0, merge=0/0, ticks=2724/0, in_queue=2724, util=96.73% 00:11:10.286 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.286 09:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:10.852 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.852 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:11.110 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.110 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:11.681 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.681 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:11.937 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.937 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71774 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.500 nvmf hotplug test: fio failed as expected 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:12.500 09:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.757 09:43:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.757 rmmod nvme_tcp 00:11:12.757 rmmod nvme_fabrics 00:11:12.757 rmmod nvme_keyring 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 71278 ']' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 71278 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 71278 ']' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 71278 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71278 00:11:12.757 killing process with pid 71278 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71278' 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 71278 00:11:12.757 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 71278 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:13.688 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:13.946 ************************************ 00:11:13.946 END TEST nvmf_fio_target 00:11:13.946 ************************************ 00:11:13.946 00:11:13.946 real 0m22.314s 00:11:13.946 user 1m23.733s 00:11:13.946 sys 0m8.673s 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.946 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.206 ************************************ 00:11:14.206 START TEST nvmf_bdevio 00:11:14.206 ************************************ 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.206 * Looking for test storage... 
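The nvmf_fio_target run that just ended above is a hotplug test: fio.sh launches a 10-second background read workload across the four exported namespaces, then deletes the backing bdevs out from under it, so the "io_u error ... Operation not supported" lines and the non-zero fio exit status (fio_status=4) are the expected outcome. A condensed sketch of that sequence, built only from commands visible in the log (the exact loop structure in fio.sh may differ):

  # Kick off reads in the background and let them ramp up...
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # ...then hot-remove every backing bdev while I/O is in flight.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
  done
  wait "$fio_pid" || fio_status=4   # 4 here means "fio failed as expected"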
00:11:14.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.206 --rc genhtml_branch_coverage=1 00:11:14.206 --rc genhtml_function_coverage=1 00:11:14.206 --rc genhtml_legend=1 00:11:14.206 --rc geninfo_all_blocks=1 00:11:14.206 --rc geninfo_unexecuted_blocks=1 00:11:14.206 00:11:14.206 ' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.206 --rc genhtml_branch_coverage=1 00:11:14.206 --rc genhtml_function_coverage=1 00:11:14.206 --rc genhtml_legend=1 00:11:14.206 --rc geninfo_all_blocks=1 00:11:14.206 --rc geninfo_unexecuted_blocks=1 00:11:14.206 00:11:14.206 ' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.206 --rc genhtml_branch_coverage=1 00:11:14.206 --rc genhtml_function_coverage=1 00:11:14.206 --rc genhtml_legend=1 00:11:14.206 --rc geninfo_all_blocks=1 00:11:14.206 --rc geninfo_unexecuted_blocks=1 00:11:14.206 00:11:14.206 ' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.206 --rc genhtml_branch_coverage=1 00:11:14.206 --rc genhtml_function_coverage=1 00:11:14.206 --rc genhtml_legend=1 00:11:14.206 --rc geninfo_all_blocks=1 00:11:14.206 --rc geninfo_unexecuted_blocks=1 00:11:14.206 00:11:14.206 ' 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.206 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
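nvmftestinit builds a private veth/bridge topology so the TCP target (inside the nvmf_tgt_ns_spdk namespace, reachable at 10.0.0.3 and 10.0.0.4) and the initiator (on the host side at 10.0.0.1 and 10.0.0.2) talk over the nvmf_br bridge; the "Cannot find device" lines that follow are just the cleanup pass running before anything exists yet. A condensed sketch of the first interface pair, taken from the commands traced below (the full sequence also creates nvmf_init_if2/nvmf_tgt_if2 for the second address pair):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # both ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br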
00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.207 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:14.466 Cannot find device "nvmf_init_br" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:14.466 Cannot find device "nvmf_init_br2" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:14.466 Cannot find device "nvmf_tgt_br" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.466 Cannot find device "nvmf_tgt_br2" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:14.466 Cannot find device "nvmf_init_br" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:14.466 Cannot find device "nvmf_init_br2" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:14.466 Cannot find device "nvmf_tgt_br" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:14.466 Cannot find device "nvmf_tgt_br2" 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:14.466 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:14.466 Cannot find device "nvmf_br" 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:14.467 Cannot find device "nvmf_init_if" 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:14.467 Cannot find device "nvmf_init_if2" 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.467 
09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:14.467 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:14.467 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:14.726 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:14.726 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:14.726 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:14.726 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:14.726 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:14.727 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:14.727 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:11:14.727 00:11:14.727 --- 10.0.0.3 ping statistics --- 00:11:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.727 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:14.727 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:14.727 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:11:14.727 00:11:14.727 --- 10.0.0.4 ping statistics --- 00:11:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.727 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:14.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:14.727 00:11:14.727 --- 10.0.0.1 ping statistics --- 00:11:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.727 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:14.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:14.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:14.727 00:11:14.727 --- 10.0.0.2 ping statistics --- 00:11:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.727 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=72222 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 72222 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 72222 ']' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.727 09:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 [2024-12-05 09:43:02.333924] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:11:14.727 [2024-12-05 09:43:02.334354] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.986 [2024-12-05 09:43:02.504986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.244 [2024-12-05 09:43:02.619786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.244 [2024-12-05 09:43:02.620272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.244 [2024-12-05 09:43:02.620725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.244 [2024-12-05 09:43:02.621164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.244 [2024-12-05 09:43:02.621360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.244 [2024-12-05 09:43:02.623639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:15.244 [2024-12-05 09:43:02.623909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:15.244 [2024-12-05 09:43:02.623775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:15.244 [2024-12-05 09:43:02.624003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.810 [2024-12-05 09:43:03.400614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.810 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.069 Malloc0 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
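With the target app up, bdevio.sh provisions it through a short rpc_cmd sequence: create the TCP transport, back it with a 64 MiB malloc bdev, create the subsystem, then (just below) attach the namespace and a listener on 10.0.0.3:4420. rpc_cmd forwards to scripts/rpc.py, so the equivalent direct invocations, with values taken from the log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), look like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420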
00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.069 [2024-12-05 09:43:03.508752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:16.069 { 00:11:16.069 "params": { 00:11:16.069 "name": "Nvme$subsystem", 00:11:16.069 "trtype": "$TEST_TRANSPORT", 00:11:16.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:16.069 "adrfam": "ipv4", 00:11:16.069 "trsvcid": "$NVMF_PORT", 00:11:16.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:16.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:16.069 "hdgst": ${hdgst:-false}, 00:11:16.069 "ddgst": ${ddgst:-false} 00:11:16.069 }, 00:11:16.069 "method": "bdev_nvme_attach_controller" 00:11:16.069 } 00:11:16.069 EOF 00:11:16.069 )") 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:16.069 09:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:16.069 "params": { 00:11:16.069 "name": "Nvme1", 00:11:16.069 "trtype": "tcp", 00:11:16.069 "traddr": "10.0.0.3", 00:11:16.069 "adrfam": "ipv4", 00:11:16.069 "trsvcid": "4420", 00:11:16.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:16.069 "hdgst": false, 00:11:16.069 "ddgst": false 00:11:16.069 }, 00:11:16.069 "method": "bdev_nvme_attach_controller" 00:11:16.069 }' 00:11:16.069 [2024-12-05 09:43:03.639866] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:11:16.069 [2024-12-05 09:43:03.640116] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:11:16.328 [2024-12-05 09:43:03.833992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.586 [2024-12-05 09:43:03.996987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.586 [2024-12-05 09:43:03.997115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.586 [2024-12-05 09:43:03.997123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.844 I/O targets: 00:11:16.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:16.844 00:11:16.844 00:11:16.844 CUnit - A unit testing framework for C - Version 2.1-3 00:11:16.844 http://cunit.sourceforge.net/ 00:11:16.844 00:11:16.844 00:11:16.844 Suite: bdevio tests on: Nvme1n1 00:11:16.844 Test: blockdev write read block ...passed 00:11:17.104 Test: blockdev write zeroes read block ...passed 00:11:17.104 Test: blockdev write zeroes read no split ...passed 00:11:17.104 Test: blockdev write zeroes read split ...passed 00:11:17.104 Test: blockdev write zeroes read split partial ...passed 00:11:17.104 Test: blockdev reset ...[2024-12-05 09:43:04.570189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:17.104 [2024-12-05 09:43:04.570572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:11:17.104 [2024-12-05 09:43:04.586408] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:17.104 passed 00:11:17.104 Test: blockdev write read 8 blocks ...passed 00:11:17.104 Test: blockdev write read size > 128k ...passed 00:11:17.104 Test: blockdev write read invalid size ...passed 00:11:17.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:17.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:17.104 Test: blockdev write read max offset ...passed 00:11:17.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:17.104 Test: blockdev writev readv 8 blocks ...passed 00:11:17.104 Test: blockdev writev readv 30 x 1block ...passed 00:11:17.363 Test: blockdev writev readv block ...passed 00:11:17.363 Test: blockdev writev readv size > 128k ...passed 00:11:17.363 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:17.363 Test: blockdev comparev and writev ...[2024-12-05 09:43:04.766544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.767046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.767202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.767311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.767887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.768426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.768680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.769350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.769605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.769768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.769864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.770462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.770656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.770916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.363 [2024-12-05 09:43:04.771181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:17.363 passed 00:11:17.363 Test: blockdev nvme passthru rw ...passed 00:11:17.363 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:43:04.855439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.363 [2024-12-05 09:43:04.855741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.856062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.363 [2024-12-05 09:43:04.856296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.856684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.363 [2024-12-05 09:43:04.856907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:17.363 [2024-12-05 09:43:04.857304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.363 [2024-12-05 09:43:04.857563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:17.363 passed 00:11:17.363 Test: blockdev nvme admin passthru ...passed 00:11:17.363 Test: blockdev copy ...passed 00:11:17.363 00:11:17.363 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.363 suites 1 1 n/a 0 0 00:11:17.363 tests 23 23 23 0 0 00:11:17.363 asserts 152 152 152 0 n/a 00:11:17.363 00:11:17.363 Elapsed time = 1.037 seconds 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.298 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.298 rmmod nvme_tcp 00:11:18.298 rmmod nvme_fabrics 00:11:18.298 rmmod nvme_keyring 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
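The nvmftestfini teardown traced above boils down to three steps: delete the subsystem over JSON-RPC, sync, and unload the initiator-side kernel modules. A minimal sketch of the same sequence, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket (the exact wrapper is not shown in this trace):

# Sketch of the traced teardown; rpc_cmd is assumed to call scripts/rpc.py.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # prints the "rmmod nvme_tcp ..." lines seen above
modprobe -v -r nvme-fabrics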
00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 72222 ']' 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 72222 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 72222 ']' 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 72222 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72222 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72222' 00:11:18.556 killing process with pid 72222 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 72222 00:11:18.556 09:43:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 72222 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:19.492 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:19.492 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:19.751 00:11:19.751 real 0m5.622s 00:11:19.751 user 0m21.687s 00:11:19.751 sys 0m1.255s 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.751 ************************************ 00:11:19.751 END TEST nvmf_bdevio 00:11:19.751 ************************************ 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:19.751 00:11:19.751 real 4m0.580s 00:11:19.751 user 12m25.312s 00:11:19.751 sys 1m5.052s 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.751 ************************************ 00:11:19.751 END TEST nvmf_target_core 00:11:19.751 ************************************ 00:11:19.751 09:43:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:19.751 09:43:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.751 09:43:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.751 09:43:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.751 ************************************ 00:11:19.751 START TEST nvmf_target_extra 00:11:19.751 ************************************ 00:11:19.751 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:20.027 * Looking for test storage... 
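killprocess and nvmf_veth_fini, traced just above, finish the teardown: kill the target process, restore iptables, and dismantle the veth/bridge/netns topology. A condensed sketch of both, with the pid illustrative for this run and _remove_spdk_ns assumed to be a plain `ip netns delete`:

# killprocess: confirm the pid is alive and not a sudo wrapper before killing it.
pid=72222                                   # pid from this run's log; yours differs
if kill -0 "$pid" 2>/dev/null; then
    # wait works here because the harness launched the target from this shell
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid" && wait "$pid"
fi
# iptr: restore iptables minus the SPDK_NVMF-tagged rules the test added.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# nvmf_veth_fini: detach bridge ports, then drop the bridge, veths, and namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster && ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                 # deleting one veth end removes its peer
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk            # assumed body of _remove_spdk_ns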
00:11:20.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.027 --rc genhtml_branch_coverage=1 00:11:20.027 --rc genhtml_function_coverage=1 00:11:20.027 --rc genhtml_legend=1 00:11:20.027 --rc geninfo_all_blocks=1 00:11:20.027 --rc geninfo_unexecuted_blocks=1 00:11:20.027 00:11:20.027 ' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.027 --rc genhtml_branch_coverage=1 00:11:20.027 --rc genhtml_function_coverage=1 00:11:20.027 --rc genhtml_legend=1 00:11:20.027 --rc geninfo_all_blocks=1 00:11:20.027 --rc geninfo_unexecuted_blocks=1 00:11:20.027 00:11:20.027 ' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.027 --rc genhtml_branch_coverage=1 00:11:20.027 --rc genhtml_function_coverage=1 00:11:20.027 --rc genhtml_legend=1 00:11:20.027 --rc geninfo_all_blocks=1 00:11:20.027 --rc geninfo_unexecuted_blocks=1 00:11:20.027 00:11:20.027 ' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.027 --rc genhtml_branch_coverage=1 00:11:20.027 --rc genhtml_function_coverage=1 00:11:20.027 --rc genhtml_legend=1 00:11:20.027 --rc geninfo_all_blocks=1 00:11:20.027 --rc geninfo_unexecuted_blocks=1 00:11:20.027 00:11:20.027 ' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.027 09:43:07 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.027 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.028 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.028 ************************************ 00:11:20.028 START TEST nvmf_example 00:11:20.028 ************************************ 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.028 * Looking for test storage... 
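The `lt 1.15 2` trace at the top of this test is scripts/common.sh comparing the detected lcov version against 2, field by field, splitting on '.', '-' and ':'. A condensed reconstruction of that logic from the xtrace (the decimal-sanitizing helper and the operators not exercised here are omitted, so this is a sketch, not a verbatim copy of scripts/common.sh):

# Reconstructed from the xtrace above; missing fields compare as 0 in (( )).
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer of the two version arrays, exactly as the trace shows
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]
}
lt() { cmp_versions "$1" "<" "$2"; }   # lt 1.15 2 -> 0 (true), as traced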
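The recurring "[: : integer expression expected" complaint from nvmf/common.sh line 33 comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']': the guarded variable is unset, and test's -eq needs an integer on both sides. A defensive pattern for such guards (the variable name here is hypothetical; the trace only shows it expanding to the empty string):

# SOME_TEST_FLAG is a stand-in; ":-0" keeps '[' from ever seeing an empty string.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi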
00:11:20.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.028 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.288 --rc genhtml_branch_coverage=1 00:11:20.288 --rc genhtml_function_coverage=1 00:11:20.288 --rc genhtml_legend=1 00:11:20.288 --rc geninfo_all_blocks=1 00:11:20.288 --rc geninfo_unexecuted_blocks=1 00:11:20.288 00:11:20.288 ' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.288 --rc genhtml_branch_coverage=1 00:11:20.288 --rc genhtml_function_coverage=1 00:11:20.288 --rc genhtml_legend=1 00:11:20.288 --rc geninfo_all_blocks=1 00:11:20.288 --rc geninfo_unexecuted_blocks=1 00:11:20.288 00:11:20.288 ' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.288 --rc genhtml_branch_coverage=1 00:11:20.288 --rc genhtml_function_coverage=1 00:11:20.288 --rc genhtml_legend=1 00:11:20.288 --rc geninfo_all_blocks=1 00:11:20.288 --rc geninfo_unexecuted_blocks=1 00:11:20.288 00:11:20.288 ' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.288 --rc genhtml_branch_coverage=1 00:11:20.288 --rc genhtml_function_coverage=1 00:11:20.288 --rc genhtml_legend=1 00:11:20.288 --rc geninfo_all_blocks=1 00:11:20.288 --rc geninfo_unexecuted_blocks=1 00:11:20.288 00:11:20.288 ' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:20.288 09:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.288 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.289 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:20.289 09:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:20.289 Cannot find device "nvmf_init_br" 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:20.289 Cannot find device "nvmf_init_br2" 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:20.289 Cannot find device "nvmf_tgt_br" 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.289 Cannot find device "nvmf_tgt_br2" 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:11:20.289 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:20.290 Cannot find device "nvmf_init_br" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:20.290 Cannot find device "nvmf_init_br2" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:20.290 Cannot find device "nvmf_tgt_br" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:20.290 Cannot find device "nvmf_tgt_br2" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:20.290 Cannot find device "nvmf_br" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:20.290 Cannot find 
device "nvmf_init_if" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:20.290 Cannot find device "nvmf_init_if2" 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:11:20.290 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:20.549 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:20.549 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:20.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:20.550 00:11:20.550 --- 10.0.0.3 ping statistics --- 00:11:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.550 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:20.550 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:20.550 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:11:20.550 00:11:20.550 --- 10.0.0.4 ping statistics --- 00:11:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.550 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:20.550 00:11:20.550 --- 10.0.0.1 ping statistics --- 00:11:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.550 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:20.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:20.550 00:11:20.550 --- 10.0.0.2 ping statistics --- 00:11:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.550 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.550 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72629 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72629 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 72629 ']' 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.809 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...
00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:20.809 09:43:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.744 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:11:22.002 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:34.207 Initializing NVMe Controllers
00:11:34.207 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:11:34.207 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:34.207 Initialization complete. Launching workers.
00:11:34.207 ========================================================
00:11:34.207 Latency(us)
00:11:34.207 Device Information : IOPS MiB/s Average min max
00:11:34.207 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14020.45 54.77 4564.35 823.18 25108.52
00:11:34.207 ========================================================
00:11:34.207 Total : 14020.45 54.77 4564.35 823.18 25108.52
00:11:34.207
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:34.207 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 72629 ']'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 72629
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 72629 ']'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 72629
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72629
00:11:34.207 killing process with pid 72629
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72629'
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 72629
00:11:34.207 09:43:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 72629
00:11:34.207 nvmf threads initialize successfully
00:11:34.207 bdev subsystem init successfully
00:11:34.207 created a nvmf target service
00:11:34.207 create targets's poll groups done
00:11:34.207 all subsystems of target started
00:11:34.207 nvmf target is running
00:11:34.207 all subsystems of target stopped
00:11:34.207 destroy targets's poll groups done
00:11:34.207 destroyed the nvmf target service
00:11:34.207 bdev subsystem finish successfully
00:11:34.207 nvmf threads destroy successfully
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.207
00:11:34.207 real 0m13.761s
00:11:34.207 user 0m47.624s
00:11:34.207 sys 0m2.341s
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:34.207 ************************************
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.207 END TEST nvmf_example ************************************
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:34.207 ************************************
00:11:34.207 START TEST nvmf_filesystem
00:11:34.207 ************************************
00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:34.207 * Looking for test storage...
00:11:34.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.207 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.208 --rc genhtml_branch_coverage=1 00:11:34.208 --rc genhtml_function_coverage=1 00:11:34.208 --rc genhtml_legend=1 00:11:34.208 --rc geninfo_all_blocks=1 00:11:34.208 --rc geninfo_unexecuted_blocks=1 00:11:34.208 00:11:34.208 ' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.208 --rc genhtml_branch_coverage=1 00:11:34.208 --rc genhtml_function_coverage=1 00:11:34.208 --rc genhtml_legend=1 00:11:34.208 --rc geninfo_all_blocks=1 00:11:34.208 --rc geninfo_unexecuted_blocks=1 00:11:34.208 00:11:34.208 ' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.208 --rc genhtml_branch_coverage=1 00:11:34.208 --rc genhtml_function_coverage=1 00:11:34.208 --rc genhtml_legend=1 00:11:34.208 --rc geninfo_all_blocks=1 00:11:34.208 --rc geninfo_unexecuted_blocks=1 00:11:34.208 00:11:34.208 ' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.208 --rc genhtml_branch_coverage=1 00:11:34.208 --rc genhtml_function_coverage=1 00:11:34.208 --rc genhtml_legend=1 00:11:34.208 --rc geninfo_all_blocks=1 00:11:34.208 --rc geninfo_unexecuted_blocks=1 00:11:34.208 00:11:34.208 ' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:34.208 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:34.208 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:34.209 #define SPDK_CONFIG_H 00:11:34.209 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:34.209 #define SPDK_CONFIG_APPS 1 00:11:34.209 #define SPDK_CONFIG_ARCH 
native
00:11:34.209 #define SPDK_CONFIG_ASAN 1
00:11:34.209 #define SPDK_CONFIG_AVAHI 1
00:11:34.209 #undef SPDK_CONFIG_CET
00:11:34.209 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:34.209 #define SPDK_CONFIG_COVERAGE 1
00:11:34.209 #define SPDK_CONFIG_CROSS_PREFIX
00:11:34.209 #undef SPDK_CONFIG_CRYPTO
00:11:34.209 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:34.209 #undef SPDK_CONFIG_CUSTOMOCF
00:11:34.209 #undef SPDK_CONFIG_DAOS
00:11:34.209 #define SPDK_CONFIG_DAOS_DIR
00:11:34.209 #define SPDK_CONFIG_DEBUG 1
00:11:34.209 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:34.209 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:11:34.209 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:34.209 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:34.209 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:34.209 #undef SPDK_CONFIG_DPDK_UADK
00:11:34.209 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:34.209 #define SPDK_CONFIG_EXAMPLES 1
00:11:34.209 #undef SPDK_CONFIG_FC
00:11:34.209 #define SPDK_CONFIG_FC_PATH
00:11:34.209 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:34.209 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:34.209 #define SPDK_CONFIG_FSDEV 1
00:11:34.209 #undef SPDK_CONFIG_FUSE
00:11:34.209 #undef SPDK_CONFIG_FUZZER
00:11:34.209 #define SPDK_CONFIG_FUZZER_LIB
00:11:34.209 #define SPDK_CONFIG_GOLANG 1
00:11:34.209 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:34.209 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:34.209 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:34.209 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:34.209 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:34.209 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:34.209 #undef SPDK_CONFIG_HAVE_LZ4
00:11:34.209 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:34.209 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:34.209 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:34.209 #define SPDK_CONFIG_IDXD 1
00:11:34.209 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:34.209 #undef SPDK_CONFIG_IPSEC_MB
00:11:34.209 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:34.209 #define SPDK_CONFIG_ISAL 1
00:11:34.209 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:34.209 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:34.209 #define SPDK_CONFIG_LIBDIR
00:11:34.209 #undef SPDK_CONFIG_LTO
00:11:34.209 #define SPDK_CONFIG_MAX_LCORES 128
00:11:34.209 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:34.209 #define SPDK_CONFIG_NVME_CUSE 1
00:11:34.209 #undef SPDK_CONFIG_OCF
00:11:34.209 #define SPDK_CONFIG_OCF_PATH
00:11:34.209 #define SPDK_CONFIG_OPENSSL_PATH
00:11:34.209 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:34.209 #define SPDK_CONFIG_PGO_DIR
00:11:34.209 #undef SPDK_CONFIG_PGO_USE
00:11:34.209 #define SPDK_CONFIG_PREFIX /usr/local
00:11:34.209 #undef SPDK_CONFIG_RAID5F
00:11:34.209 #undef SPDK_CONFIG_RBD
00:11:34.209 #define SPDK_CONFIG_RDMA 1
00:11:34.209 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:34.209 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:34.209 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:34.209 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:34.209 #define SPDK_CONFIG_SHARED 1
00:11:34.209 #undef SPDK_CONFIG_SMA
00:11:34.209 #define SPDK_CONFIG_TESTS 1
00:11:34.209 #undef SPDK_CONFIG_TSAN
00:11:34.209 #define SPDK_CONFIG_UBLK 1
00:11:34.209 #define SPDK_CONFIG_UBSAN 1
00:11:34.209 #undef SPDK_CONFIG_UNIT_TESTS
00:11:34.209 #undef SPDK_CONFIG_URING
00:11:34.209 #define SPDK_CONFIG_URING_PATH
00:11:34.209 #undef SPDK_CONFIG_URING_ZNS
00:11:34.209 #define SPDK_CONFIG_USDT 1
00:11:34.209 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:34.209 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:34.209 #define SPDK_CONFIG_VFIO_USER 1 00:11:34.209 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:34.209 #define SPDK_CONFIG_VHOST 1 00:11:34.209 #define SPDK_CONFIG_VIRTIO 1 00:11:34.209 #undef SPDK_CONFIG_VTUNE 00:11:34.209 #define SPDK_CONFIG_VTUNE_DIR 00:11:34.209 #define SPDK_CONFIG_WERROR 1 00:11:34.209 #define SPDK_CONFIG_WPDK_DIR 00:11:34.209 #undef SPDK_CONFIG_XNVME 00:11:34.209 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.209 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:34.210 
09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:34.210 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:34.211 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:34.211 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:34.212 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 72918 ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 72918 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.P6QJ72 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.P6QJ72/tests/target /tmp/spdk.P6QJ72 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13921312768 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5648080896 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6255058944 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13921312768 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5648080896 00:11:34.212 
09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266286080 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:34.212 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=98684198912 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1018580992 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:34.213 * Looking for test storage... 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13921312768 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:34.213 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.213 --rc genhtml_branch_coverage=1 00:11:34.213 --rc genhtml_function_coverage=1 00:11:34.213 --rc genhtml_legend=1 00:11:34.213 --rc geninfo_all_blocks=1 00:11:34.213 --rc geninfo_unexecuted_blocks=1 00:11:34.213 00:11:34.213 ' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.213 --rc genhtml_branch_coverage=1 00:11:34.213 --rc genhtml_function_coverage=1 00:11:34.213 --rc genhtml_legend=1 00:11:34.213 --rc geninfo_all_blocks=1 00:11:34.213 --rc geninfo_unexecuted_blocks=1 00:11:34.213 00:11:34.213 ' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.213 --rc genhtml_branch_coverage=1 00:11:34.213 --rc genhtml_function_coverage=1 00:11:34.213 --rc genhtml_legend=1 00:11:34.213 --rc geninfo_all_blocks=1 00:11:34.213 --rc geninfo_unexecuted_blocks=1 00:11:34.213 00:11:34.213 ' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.213 --rc genhtml_branch_coverage=1 00:11:34.213 --rc genhtml_function_coverage=1 00:11:34.213 --rc genhtml_legend=1 00:11:34.213 --rc geninfo_all_blocks=1 00:11:34.213 --rc geninfo_unexecuted_blocks=1 00:11:34.213 00:11:34.213 ' 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:11:34.213 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.214 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.474 09:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:34.474 Cannot find device "nvmf_init_br" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:34.474 Cannot find device "nvmf_init_br2" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:34.474 Cannot find device "nvmf_tgt_br" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:34.474 Cannot find device "nvmf_tgt_br2" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:34.474 Cannot find device "nvmf_init_br" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:34.474 Cannot find device "nvmf_init_br2" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:34.474 Cannot find device "nvmf_tgt_br" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:34.474 Cannot find device "nvmf_tgt_br2" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:34.474 Cannot find device "nvmf_br" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:34.474 Cannot find device "nvmf_init_if" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:34.474 Cannot find device "nvmf_init_if2" 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.474 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.474 09:43:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.474 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:34.475 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:34.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:34.734 00:11:34.734 --- 10.0.0.3 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:34.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:34.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:11:34.734 00:11:34.734 --- 10.0.0.4 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:34.734 00:11:34.734 --- 10.0.0.1 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:34.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:34.734 00:11:34.734 --- 10.0.0.2 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.734 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.735 ************************************ 00:11:34.735 START TEST nvmf_filesystem_no_in_capsule 00:11:34.735 ************************************ 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
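The ip/iptables sequence above assembles the self-contained NVMe/TCP test network that nvmf_veth_init provides: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the peer ends enslaved to the nvmf_br bridge, and iptables rules admitting port 4420. The earlier "Cannot find device" and "Cannot open network namespace" lines are the cleanup pass deleting interfaces left over from a previous run, so they are expected on a fresh VM. A condensed sketch of the same topology, reduced to one initiator pair and one target pair (names, addresses, and the 4420 rule are taken from the log; the second pair of each kind and the ipts wrapper's "-m comment" tagging are omitted):

  # Target runs in its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                                  # bridge joins the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # forward across the bridge
  ping -c 1 10.0.0.3                                               # root namespace -> target

The four ping blocks above verify both directions of that path: the root namespace reaches 10.0.0.3 and 10.0.0.4, and the namespaced target reaches 10.0.0.1 and 10.0.0.2, all with 0% packet loss.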
00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73104 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73104 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73104 ']' 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.735 09:43:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.994 [2024-12-05 09:43:22.413793] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:34.994 [2024-12-05 09:43:22.413993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.994 [2024-12-05 09:43:22.610094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.253 [2024-12-05 09:43:22.763953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.253 [2024-12-05 09:43:22.764040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.253 [2024-12-05 09:43:22.764068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.253 [2024-12-05 09:43:22.764084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.253 [2024-12-05 09:43:22.764105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
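nvmfappstart then launches the target inside that namespace and waits for its RPC socket. A sketch of the recorded invocation, with the flags glossed from the surrounding output rather than any external source (the shared-memory ID, tracepoint mask, and core mask readings come from the -i/-e/-m handling and the NOTICE lines above):

  # -i 0       app shared-memory ID; the trace file is /dev/shm/nvmf_trace.0
  # -e 0xFFFF  tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified")
  # -m 0xF     core mask: reactors start on cores 0-3 ("Total cores available: 4")
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                      # recorded above as nvmfpid=73104
  # waitforlisten polls until RPCs are served on /var/tmp/spdk.sock; once up,
  # 'spdk_trace -s nvmf -i 0' can snapshot runtime events, per the NOTICE lines.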
00:11:35.253 [2024-12-05 09:43:22.766811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.253 [2024-12-05 09:43:22.767001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.253 [2024-12-05 09:43:22.767834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.253 [2024-12-05 09:43:22.767870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.854 [2024-12-05 09:43:23.450674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.854 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 Malloc1 00:11:36.430 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.430 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.430 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.430 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.430 09:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 [2024-12-05 09:43:24.025780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.430 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:36.689 { 00:11:36.689 "aliases": [ 00:11:36.689 "ab6d9394-79ca-44d0-826e-059764af87b8" 00:11:36.689 ], 00:11:36.689 "assigned_rate_limits": { 00:11:36.689 "r_mbytes_per_sec": 0, 00:11:36.689 "rw_ios_per_sec": 0, 00:11:36.689 "rw_mbytes_per_sec": 0, 00:11:36.689 "w_mbytes_per_sec": 0 00:11:36.689 }, 00:11:36.689 "block_size": 512, 00:11:36.689 "claim_type": "exclusive_write", 00:11:36.689 "claimed": true, 00:11:36.689 "driver_specific": {}, 00:11:36.689 "memory_domains": [ 00:11:36.689 { 00:11:36.689 "dma_device_id": "system", 00:11:36.689 "dma_device_type": 1 00:11:36.689 }, 00:11:36.689 { 00:11:36.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.689 
"dma_device_type": 2 00:11:36.689 } 00:11:36.689 ], 00:11:36.689 "name": "Malloc1", 00:11:36.689 "num_blocks": 1048576, 00:11:36.689 "product_name": "Malloc disk", 00:11:36.689 "supported_io_types": { 00:11:36.689 "abort": true, 00:11:36.689 "compare": false, 00:11:36.689 "compare_and_write": false, 00:11:36.689 "copy": true, 00:11:36.689 "flush": true, 00:11:36.689 "get_zone_info": false, 00:11:36.689 "nvme_admin": false, 00:11:36.689 "nvme_io": false, 00:11:36.689 "nvme_io_md": false, 00:11:36.689 "nvme_iov_md": false, 00:11:36.689 "read": true, 00:11:36.689 "reset": true, 00:11:36.689 "seek_data": false, 00:11:36.689 "seek_hole": false, 00:11:36.689 "unmap": true, 00:11:36.689 "write": true, 00:11:36.689 "write_zeroes": true, 00:11:36.689 "zcopy": true, 00:11:36.689 "zone_append": false, 00:11:36.689 "zone_management": false 00:11:36.689 }, 00:11:36.689 "uuid": "ab6d9394-79ca-44d0-826e-059764af87b8", 00:11:36.689 "zoned": false 00:11:36.689 } 00:11:36.689 ]' 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:36.689 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:36.948 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.948 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.948 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.948 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:36.948 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:38.850 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.107 09:43:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:40.041 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:40.041 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:40.041 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:40.041 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 ************************************ 00:11:40.042 START TEST filesystem_ext4 00:11:40.042 ************************************ 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
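At this point the initiator has resolved the exported namespace by its SPDKISFASTANDAWESOME serial, confirmed its size matches the 536870912-byte malloc bdev, and carved a single GPT partition out of it; each filesystem subtest that follows formats that partition, mounts it, writes and deletes a file, and unmounts. A minimal standalone sketch of the same flow, assuming the namespace enumerated as /dev/nvme0n1 and reusing /mnt/device as in the log (the device name is whatever lsblk reports on your host):

dev=/dev/nvme0n1                                        # assumed; find yours via: lsblk -l -o NAME,SERIAL
mkdir -p /mnt/device
parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the namespace
partprobe && sleep 1                                    # let the kernel re-read the partition table
mkfs.ext4 -F "${dev}p1"                                 # -F: force mkfs on an existing block device
mount "${dev}p1" /mnt/device
touch /mnt/device/aaa && sync                           # a write that must round-trip over NVMe/TCP
rm /mnt/device/aaa && sync
umount /mnt/device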
00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:40.042 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:40.042 mke2fs 1.47.0 (5-Feb-2023) 00:11:40.301 Discarding device blocks: 0/522240 done 00:11:40.301 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:40.301 Filesystem UUID: 26ba2c52-ea0f-480e-9d19-fb6330883b2b 00:11:40.301 Superblock backups stored on blocks: 00:11:40.301 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:40.301 00:11:40.301 Allocating group tables: 0/64 done 00:11:40.301 Writing inode tables: 0/64 done 00:11:40.301 Creating journal (8192 blocks): done 00:11:40.301 Writing superblocks and filesystem accounting information: 0/64 done 00:11:40.301 00:11:40.301 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:40.301 09:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.568 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.568 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.827 
09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 73104 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.827 ************************************ 00:11:45.827 END TEST filesystem_ext4 00:11:45.827 ************************************ 00:11:45.827 00:11:45.827 real 0m5.697s 00:11:45.827 user 0m0.024s 00:11:45.827 sys 0m0.067s 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.827 ************************************ 00:11:45.827 START TEST filesystem_btrfs 00:11:45.827 ************************************ 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.827 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:45.828 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.828 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.828 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:45.828 09:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:45.828 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.828 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:46.087 btrfs-progs v6.8.1 00:11:46.087 See https://btrfs.readthedocs.io for more information. 00:11:46.087 00:11:46.087 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:46.087 NOTE: several default settings have changed in version 5.15, please make sure 00:11:46.087 this does not affect your deployments: 00:11:46.087 - DUP for metadata (-m dup) 00:11:46.087 - enabled no-holes (-O no-holes) 00:11:46.087 - enabled free-space-tree (-R free-space-tree) 00:11:46.087 00:11:46.087 Label: (null) 00:11:46.087 UUID: 707108eb-7e16-4d70-81ed-9f3ef1018e51 00:11:46.087 Node size: 16384 00:11:46.087 Sector size: 4096 (CPU page size: 4096) 00:11:46.087 Filesystem size: 510.00MiB 00:11:46.087 Block group profiles: 00:11:46.087 Data: single 8.00MiB 00:11:46.087 Metadata: DUP 32.00MiB 00:11:46.087 System: DUP 8.00MiB 00:11:46.087 SSD detected: yes 00:11:46.087 Zoned device: no 00:11:46.087 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:46.087 Checksum: crc32c 00:11:46.087 Number of devices: 1 00:11:46.087 Devices: 00:11:46.087 ID SIZE PATH 00:11:46.087 1 510.00MiB /dev/nvme0n1p1 00:11:46.087 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 73104 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.087 
09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.087 ************************************ 00:11:46.087 END TEST filesystem_btrfs 00:11:46.087 ************************************ 00:11:46.087 00:11:46.087 real 0m0.292s 00:11:46.087 user 0m0.026s 00:11:46.087 sys 0m0.056s 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.087 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.087 ************************************ 00:11:46.087 START TEST filesystem_xfs 00:11:46.087 ************************************ 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:46.088 09:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.347 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.347 = sectsz=512 attr=2, projid32bit=1 00:11:46.347 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.347 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.347 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:46.347 = sunit=0 swidth=0 blks 00:11:46.347 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.347 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.347 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.347 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:46.914 Discarding blocks...Done. 00:11:46.914 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.914 09:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 73104 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.540 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.541 ************************************ 00:11:49.541 END TEST filesystem_xfs 00:11:49.541 ************************************ 00:11:49.541 00:11:49.541 real 0m3.214s 00:11:49.541 user 0m0.025s 00:11:49.541 sys 0m0.057s 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.541 09:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.541 09:43:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 73104 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73104 ']' 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73104 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73104 00:11:49.541 killing process with pid 73104 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73104' 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 73104 00:11:49.541 09:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 73104 00:11:52.076 ************************************ 00:11:52.076 END TEST nvmf_filesystem_no_in_capsule 00:11:52.076 ************************************ 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:52.076 00:11:52.076 real 0m16.981s 00:11:52.076 user 1m4.348s 00:11:52.076 sys 0m1.893s 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.076 ************************************ 00:11:52.076 START TEST nvmf_filesystem_in_capsule 00:11:52.076 ************************************ 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:52.076 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73506 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73506 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73506 ']' 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
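The in_capsule variant starting here reruns the identical ext4/btrfs/xfs exercises; the only functional change is the in-capsule data size passed to nvmf_create_transport, which lets write payloads of up to 4096 bytes travel inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. The two transport setups in this log differ only in the -c argument (rpc_cmd being the test harness wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # no_in_capsule pass: in-capsule data disabled
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4 KiB in-capsule data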
00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.077 09:43:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.077 [2024-12-05 09:43:39.419311] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:11:52.077 [2024-12-05 09:43:39.419801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.077 [2024-12-05 09:43:39.587999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.077 [2024-12-05 09:43:39.704034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.336 [2024-12-05 09:43:39.704405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.336 [2024-12-05 09:43:39.704569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.336 [2024-12-05 09:43:39.704853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.336 [2024-12-05 09:43:39.704908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.336 [2024-12-05 09:43:39.707182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.336 [2024-12-05 09:43:39.707276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.336 [2024-12-05 09:43:39.707436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.336 [2024-12-05 09:43:39.707782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.905 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.905 [2024-12-05 09:43:40.523075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.163 09:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.163 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.163 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.163 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.421 Malloc1 00:11:53.421 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.421 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.421 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.421 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.421 [2024-12-05 09:43:41.022720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:53.421 09:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.421 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.678 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.678 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:53.678 { 00:11:53.678 "aliases": [ 00:11:53.678 "52bd936f-2d6a-4950-8f62-1bd77c5dcdb8" 00:11:53.678 ], 00:11:53.678 "assigned_rate_limits": { 00:11:53.678 "r_mbytes_per_sec": 0, 00:11:53.678 "rw_ios_per_sec": 0, 00:11:53.678 "rw_mbytes_per_sec": 0, 00:11:53.678 "w_mbytes_per_sec": 0 00:11:53.678 }, 00:11:53.678 "block_size": 512, 00:11:53.678 "claim_type": "exclusive_write", 00:11:53.678 "claimed": true, 00:11:53.678 "driver_specific": {}, 00:11:53.678 "memory_domains": [ 00:11:53.678 { 00:11:53.679 "dma_device_id": "system", 00:11:53.679 "dma_device_type": 1 00:11:53.679 }, 00:11:53.679 { 00:11:53.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.679 "dma_device_type": 2 00:11:53.679 } 00:11:53.679 ], 00:11:53.679 "name": "Malloc1", 00:11:53.679 "num_blocks": 1048576, 00:11:53.679 "product_name": "Malloc disk", 00:11:53.679 "supported_io_types": { 00:11:53.679 "abort": true, 00:11:53.679 "compare": false, 00:11:53.679 "compare_and_write": false, 00:11:53.679 "copy": true, 00:11:53.679 "flush": true, 00:11:53.679 "get_zone_info": false, 00:11:53.679 "nvme_admin": false, 00:11:53.679 "nvme_io": false, 00:11:53.679 "nvme_io_md": false, 00:11:53.679 "nvme_iov_md": false, 00:11:53.679 "read": true, 00:11:53.679 "reset": true, 00:11:53.679 "seek_data": false, 00:11:53.679 "seek_hole": false, 00:11:53.679 "unmap": true, 00:11:53.679 "write": true, 00:11:53.679 "write_zeroes": true, 00:11:53.679 "zcopy": true, 00:11:53.679 "zone_append": false, 00:11:53.679 "zone_management": false 00:11:53.679 }, 00:11:53.679 "uuid": "52bd936f-2d6a-4950-8f62-1bd77c5dcdb8", 00:11:53.679 "zoned": false 00:11:53.679 } 00:11:53.679 ]' 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.679 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:53.936 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.936 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.936 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.936 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.936 09:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:55.835 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.836 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.836 09:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:56.094 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.028 ************************************ 00:11:57.028 START TEST filesystem_in_capsule_ext4 00:11:57.028 ************************************ 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:57.028 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:57.028 mke2fs 1.47.0 (5-Feb-2023) 00:11:57.028 Discarding device blocks: 0/522240 done 00:11:57.028 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:57.028 Filesystem UUID: 9f6a9f42-da97-4829-a768-13e443b94b32 00:11:57.028 Superblock backups stored on blocks: 00:11:57.028 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:57.028 00:11:57.028 Allocating group tables: 0/64 done 00:11:57.028 Writing inode tables: 
0/64 done 00:11:57.028 Creating journal (8192 blocks): done 00:11:57.287 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.287 00:11:57.287 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.287 09:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73506 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.568 ************************************ 00:12:02.568 END TEST filesystem_in_capsule_ext4 00:12:02.568 ************************************ 00:12:02.568 00:12:02.568 real 0m5.683s 00:12:02.568 user 0m0.021s 00:12:02.568 sys 0m0.066s 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.568 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.827 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.828 
************************************ 00:12:02.828 START TEST filesystem_in_capsule_btrfs 00:12:02.828 ************************************ 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:02.828 btrfs-progs v6.8.1 00:12:02.828 See https://btrfs.readthedocs.io for more information. 00:12:02.828 00:12:02.828 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:02.828 NOTE: several default settings have changed in version 5.15, please make sure 00:12:02.828 this does not affect your deployments: 00:12:02.828 - DUP for metadata (-m dup) 00:12:02.828 - enabled no-holes (-O no-holes) 00:12:02.828 - enabled free-space-tree (-R free-space-tree) 00:12:02.828 00:12:02.828 Label: (null) 00:12:02.828 UUID: 80584a1c-671f-477d-9184-74adb0750c83 00:12:02.828 Node size: 16384 00:12:02.828 Sector size: 4096 (CPU page size: 4096) 00:12:02.828 Filesystem size: 510.00MiB 00:12:02.828 Block group profiles: 00:12:02.828 Data: single 8.00MiB 00:12:02.828 Metadata: DUP 32.00MiB 00:12:02.828 System: DUP 8.00MiB 00:12:02.828 SSD detected: yes 00:12:02.828 Zoned device: no 00:12:02.828 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:02.828 Checksum: crc32c 00:12:02.828 Number of devices: 1 00:12:02.828 Devices: 00:12:02.828 ID SIZE PATH 00:12:02.828 1 510.00MiB /dev/nvme0n1p1 00:12:02.828 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:02.828 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73506 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.087 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.087 ************************************ 00:12:03.087 END TEST filesystem_in_capsule_btrfs 00:12:03.087 ************************************ 00:12:03.087 00:12:03.087 real 0m0.289s 00:12:03.087 user 0m0.019s 00:12:03.087 sys 0m0.064s 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
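Each subtest closes with the same health checks before its END TEST banner: kill -0 confirms the target process survived the I/O, and two lsblk probes confirm the namespace and its partition are still visible to the initiator. A sketch of that verification tail, with the pid taken from this run:

kill -0 73506                              # target nvmf_tgt still alive (pid from this log)
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still enumerated
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition table intact

The real/user/sys triplet printed at END TEST appears to be the shell's time reporting for the run_test wrapper, not I/O-only time.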
00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 ************************************ 00:12:03.088 START TEST filesystem_in_capsule_xfs 00:12:03.088 ************************************ 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.088 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.347 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.347 = sectsz=512 attr=2, projid32bit=1 00:12:03.347 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.347 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.347 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.347 = sunit=0 swidth=0 blks 00:12:03.347 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.347 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.347 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.347 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.915 Discarding blocks...Done. 
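The mkfs.xfs geometry dump above comes from the same make_filesystem helper that drove mkfs.btrfs earlier; the xtrace lines show its shape. A rough paraphrase (the retry counter `local i=0` visible in the trace, and its loop bounds, are elided here):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F              # ext4 spells "force" differently
    else
        force=-f              # the btrfs/xfs case taken in this run
    fi
    mkfs."$fstype" $force "$dev_name"
}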
00:12:03.915 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.915 09:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73506 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.820 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.820 ************************************ 00:12:05.820 END TEST filesystem_in_capsule_xfs 00:12:05.820 ************************************ 00:12:05.820 00:12:05.820 real 0m2.714s 00:12:05.821 user 0m0.023s 00:12:05.821 sys 0m0.059s 00:12:05.821 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.821 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.821 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:05.821 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:05.821 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.080 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.080 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73506 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73506 ']' 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73506 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73506 00:12:06.081 killing process with pid 73506 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73506' 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 73506 00:12:06.081 09:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 73506 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:08.615 00:12:08.615 real 0m16.480s 00:12:08.615 user 1m2.784s 00:12:08.615 sys 0m1.774s 00:12:08.615 09:43:55 
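Teardown order matters here: the initiator is disconnected first, then the subsystem is removed over RPC, and only then is the target process killed and reaped. Stripped of the helper wrappers (waitforserial_disconnect, rpc_cmd, killprocess), the sequence is roughly:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the host-side session
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 73506 && wait 73506                             # stop and reap nvmf_tgt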
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.615 ************************************ 00:12:08.615 END TEST nvmf_filesystem_in_capsule 00:12:08.615 ************************************ 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.615 rmmod nvme_tcp 00:12:08.615 rmmod nvme_fabrics 00:12:08.615 rmmod nvme_keyring 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:08.615 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.615 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:12:08.615 00:12:08.615 real 0m34.827s 00:12:08.616 user 2m7.589s 00:12:08.616 sys 0m4.222s 00:12:08.616 ************************************ 00:12:08.616 END TEST nvmf_filesystem 00:12:08.616 ************************************ 00:12:08.616 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.616 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.874 ************************************ 00:12:08.874 START TEST nvmf_target_discovery 00:12:08.874 ************************************ 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:08.874 * Looking for test storage... 
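nvmftestfini above unwinds everything the setup created, in reverse: unload the kernel initiator modules, strip only the SPDK-tagged firewall rules, then dismantle the bridge, veth pairs and namespace. Condensed (the final namespace removal happens inside the xtrace-disabled _remove_spdk_ns, so the last line is an assumption):

modprobe -r nvme-tcp nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep unrelated rules intact
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk     # assumed: performed inside _remove_spdk_ns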
00:12:08.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.874 --rc genhtml_branch_coverage=1 00:12:08.874 --rc genhtml_function_coverage=1 00:12:08.874 --rc genhtml_legend=1 00:12:08.874 --rc geninfo_all_blocks=1 00:12:08.874 --rc geninfo_unexecuted_blocks=1 00:12:08.874 00:12:08.874 ' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.874 --rc genhtml_branch_coverage=1 00:12:08.874 --rc genhtml_function_coverage=1 00:12:08.874 --rc genhtml_legend=1 00:12:08.874 --rc geninfo_all_blocks=1 00:12:08.874 --rc geninfo_unexecuted_blocks=1 00:12:08.874 00:12:08.874 ' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.874 --rc genhtml_branch_coverage=1 00:12:08.874 --rc genhtml_function_coverage=1 00:12:08.874 --rc genhtml_legend=1 00:12:08.874 --rc geninfo_all_blocks=1 00:12:08.874 --rc geninfo_unexecuted_blocks=1 00:12:08.874 00:12:08.874 ' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.874 --rc genhtml_branch_coverage=1 00:12:08.874 --rc genhtml_function_coverage=1 00:12:08.874 --rc genhtml_legend=1 00:12:08.874 --rc geninfo_all_blocks=1 00:12:08.874 --rc geninfo_unexecuted_blocks=1 00:12:08.874 00:12:08.874 ' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
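The wall of scripts/common.sh xtrace above is just a component-wise version comparison deciding whether the installed lcov predates 2.x. Re-derived as a standalone function (same split-on-`.`, `-`, `:` logic; assumes purely numeric components):

lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov older than 2.x: enable the branch-coverage options'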
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
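Worth noting in the common.sh setup above: `nvme gen-hostnqn` mints a fresh host identity per run, and the same UUID doubles as the host ID that every later `nvme discover`/`nvme connect` passes along. One way to express that derivation (the suite's exact extraction step is xtrace-hidden, so the second line is an assumption):

NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<random>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed: strip down to the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")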
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
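The PATH echoed above is dominated by the same /opt/go, /opt/protoc and /opt/golangci entries repeated several times: paths/export.sh prepends them on every source, and this run has sourced it repeatedly. Harmless for lookup (first hit wins), just noisy. Purely as an illustration, not part of the suite, duplicates could be squeezed out with:

dedupe_path() {            # hypothetical helper: keep first occurrence of each dir
    local dir out='' seen=':'
    while IFS= read -r -d: dir; do
        [[ $seen == *":$dir:"* ]] && continue
        seen+="$dir:"
        out+="${out:+:}$dir"
    done <<< "$PATH:"
    PATH=$out
}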
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.874 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
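The `[: : integer expression expected` complaint above is a real, if benign, scripting wart: nvmf/common.sh line 33 runs a numeric `-eq` test against a variable that is empty in this configuration, so `[` rejects the empty string and the branch simply falls through. The defensive form would default the operand first (the variable name is not shown in the trace, so SOME_FLAG below is a placeholder):

if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # SOME_FLAG is hypothetical
    :   # flag-specific nvmf_tgt arguments would be appended here
fi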
00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.874 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:09.132 Cannot find device "nvmf_init_br" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:09.132 Cannot find device "nvmf_init_br2" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:09.132 Cannot find device "nvmf_tgt_br" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.132 Cannot find device "nvmf_tgt_br2" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:09.132 Cannot find device "nvmf_init_br" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:09.132 Cannot find device "nvmf_init_br2" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:09.132 Cannot find device "nvmf_tgt_br" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:09.132 Cannot find device "nvmf_tgt_br2" 00:12:09.132 09:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:09.132 Cannot find device "nvmf_br" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:09.132 Cannot find device "nvmf_init_if" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:09.132 Cannot find device "nvmf_init_if2" 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:09.132 09:43:56 
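Every `Cannot find device ...` and `Cannot open network namespace ...` line above is expected: nvmf_veth_init begins by best-effort deleting whatever a previous run might have left behind, and each delete is paired with a `# true` in the trace so a missing device can never fail the job. The idiom, in isolation:

ip link delete nvmf_init_if 2>/dev/null || true                            # absent is fine
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true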
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:09.132 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:09.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:09.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:09.391 00:12:09.391 --- 10.0.0.3 ping statistics --- 00:12:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.391 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:09.391 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:09.391 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:12:09.391 00:12:09.391 --- 10.0.0.4 ping statistics --- 00:12:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.391 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:09.391 00:12:09.391 --- 10.0.0.1 ping statistics --- 00:12:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.391 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:09.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:09.391 00:12:09.391 --- 10.0.0.2 ping statistics --- 00:12:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.391 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=74135 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
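With the four pings above, the virtual fabric is proven in both directions: the initiator-side veths nvmf_init_if/if2 (10.0.0.1/.2) and the namespace-side nvmf_tgt_if/if2 (10.0.0.3/.4) all reach each other across the nvmf_br bridge, through iptables ACCEPT rules tagged SPDK_NVMF. The topology, condensed from the commands echoed earlier (the second interface pair and the individual link-up steps are elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator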
00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 74135 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 74135 ']' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.391 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.650 [2024-12-05 09:43:57.059783] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:09.650 [2024-12-05 09:43:57.059993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.650 [2024-12-05 09:43:57.242583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.908 [2024-12-05 09:43:57.362447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.908 [2024-12-05 09:43:57.362525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.908 [2024-12-05 09:43:57.362545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.908 [2024-12-05 09:43:57.362556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.908 [2024-12-05 09:43:57.362571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
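`waitforlisten 74135` above is what separates launching nvmf_tgt from being able to drive it: it blocks until the target's JSON-RPC socket answers. A minimal rendering of that polling loop (the real helper in autotest_common.sh handles custom RPC addresses and retry limits differently; the cap below is assumed for the sketch):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
    while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; do
        kill -0 "$pid" || return 1    # give up if the target already died
        (( ++i > 100 )) && return 1   # ~10 s cap, assumed
        sleep 0.1
    done
}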
00:12:09.908 [2024-12-05 09:43:57.364793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.908 [2024-12-05 09:43:57.364966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.908 [2024-12-05 09:43:57.365097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.908 [2024-12-05 09:43:57.365747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 [2024-12-05 09:43:58.064625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 Null1 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:10.476 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.477 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 09:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 [2024-12-05 09:43:58.127420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 Null2 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:10.736 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:10.737 Null3 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 Null4 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 
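The repetitive xtrace above is discovery.sh provisioning four identical null-backed subsystems; the listener for cnode4, a listener on the discovery subsystem itself, and a referral to port 4430 follow immediately after. Flattened into plain RPC calls (rpc_cmd expands to roughly scripts/rpc.py against the target's socket; sizes come from NULL_BDEV_SIZE/NULL_BLOCK_SIZE set earlier):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430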
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 4420 00:12:10.737 00:12:10.737 Discovery Log Number of Records 6, Generation counter 6 00:12:10.737 =====Discovery Log Entry 0====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: current discovery subsystem 00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4420 00:12:10.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: explicit discovery connections, duplicate discovery information 00:12:10.737 sectype: none 00:12:10.737 =====Discovery Log Entry 1====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: nvme subsystem 00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4420 00:12:10.737 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: none 00:12:10.737 sectype: none 00:12:10.737 =====Discovery Log Entry 2====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: nvme subsystem 00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4420 00:12:10.737 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: none 00:12:10.737 sectype: none 00:12:10.737 =====Discovery Log Entry 3====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: nvme subsystem 00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4420 00:12:10.737 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: none 00:12:10.737 sectype: none 00:12:10.737 =====Discovery Log Entry 4====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: nvme subsystem 
00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4420 00:12:10.737 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: none 00:12:10.737 sectype: none 00:12:10.737 =====Discovery Log Entry 5====== 00:12:10.737 trtype: tcp 00:12:10.737 adrfam: ipv4 00:12:10.737 subtype: discovery subsystem referral 00:12:10.737 treq: not required 00:12:10.737 portid: 0 00:12:10.737 trsvcid: 4430 00:12:10.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.737 traddr: 10.0.0.3 00:12:10.737 eflags: none 00:12:10.737 sectype: none 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:10.737 Perform nvmf subsystem discovery via RPC 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.737 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 [ 00:12:10.997 { 00:12:10.997 "allow_any_host": true, 00:12:10.997 "hosts": [], 00:12:10.997 "listen_addresses": [ 00:12:10.997 { 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420", 00:12:10.997 "trtype": "TCP" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:10.997 "subtype": "Discovery" 00:12:10.997 }, 00:12:10.997 { 00:12:10.997 "allow_any_host": true, 00:12:10.997 "hosts": [], 00:12:10.997 "listen_addresses": [ 00:12:10.997 { 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420", 00:12:10.997 "trtype": "TCP" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "max_cntlid": 65519, 00:12:10.997 "max_namespaces": 32, 00:12:10.997 "min_cntlid": 1, 00:12:10.997 "model_number": "SPDK bdev Controller", 00:12:10.997 "namespaces": [ 00:12:10.997 { 00:12:10.997 "bdev_name": "Null1", 00:12:10.997 "name": "Null1", 00:12:10.997 "nguid": "DC6370CFCCBC4F18A9B5BF08A91FDAC2", 00:12:10.997 "nsid": 1, 00:12:10.997 "uuid": "dc6370cf-ccbc-4f18-a9b5-bf08a91fdac2" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.997 "serial_number": "SPDK00000000000001", 00:12:10.997 "subtype": "NVMe" 00:12:10.997 }, 00:12:10.997 { 00:12:10.997 "allow_any_host": true, 00:12:10.997 "hosts": [], 00:12:10.997 "listen_addresses": [ 00:12:10.997 { 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420", 00:12:10.997 "trtype": "TCP" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "max_cntlid": 65519, 00:12:10.997 "max_namespaces": 32, 00:12:10.997 "min_cntlid": 1, 00:12:10.997 "model_number": "SPDK bdev Controller", 00:12:10.997 "namespaces": [ 00:12:10.997 { 00:12:10.997 "bdev_name": "Null2", 00:12:10.997 "name": "Null2", 00:12:10.997 "nguid": "AD40EF3556D04D4AB85898EFB78B81E8", 00:12:10.997 "nsid": 1, 00:12:10.997 "uuid": "ad40ef35-56d0-4d4a-b858-98efb78b81e8" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:10.997 "serial_number": "SPDK00000000000002", 00:12:10.997 "subtype": "NVMe" 00:12:10.997 }, 00:12:10.997 { 00:12:10.997 "allow_any_host": true, 00:12:10.997 "hosts": [], 00:12:10.997 "listen_addresses": [ 00:12:10.997 { 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420", 00:12:10.997 
"trtype": "TCP" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "max_cntlid": 65519, 00:12:10.997 "max_namespaces": 32, 00:12:10.997 "min_cntlid": 1, 00:12:10.997 "model_number": "SPDK bdev Controller", 00:12:10.997 "namespaces": [ 00:12:10.997 { 00:12:10.997 "bdev_name": "Null3", 00:12:10.997 "name": "Null3", 00:12:10.997 "nguid": "0EC8081A5AED4EAD87A7BF822E9898AA", 00:12:10.997 "nsid": 1, 00:12:10.997 "uuid": "0ec8081a-5aed-4ead-87a7-bf822e9898aa" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:10.997 "serial_number": "SPDK00000000000003", 00:12:10.997 "subtype": "NVMe" 00:12:10.997 }, 00:12:10.997 { 00:12:10.997 "allow_any_host": true, 00:12:10.997 "hosts": [], 00:12:10.997 "listen_addresses": [ 00:12:10.997 { 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420", 00:12:10.997 "trtype": "TCP" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "max_cntlid": 65519, 00:12:10.997 "max_namespaces": 32, 00:12:10.997 "min_cntlid": 1, 00:12:10.997 "model_number": "SPDK bdev Controller", 00:12:10.997 "namespaces": [ 00:12:10.997 { 00:12:10.997 "bdev_name": "Null4", 00:12:10.997 "name": "Null4", 00:12:10.997 "nguid": "E6308DCA020F4B8CAA934C225F1EC580", 00:12:10.997 "nsid": 1, 00:12:10.997 "uuid": "e6308dca-020f-4b8c-aa93-4c225f1ec580" 00:12:10.997 } 00:12:10.997 ], 00:12:10.997 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:10.997 "serial_number": "SPDK00000000000004", 00:12:10.997 "subtype": "NVMe" 00:12:10.997 } 00:12:10.997 ] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:10.997 09:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.997 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.998 rmmod nvme_tcp 00:12:10.998 rmmod nvme_fabrics 00:12:10.998 rmmod nvme_keyring 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 74135 ']' 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 74135 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 74135 ']' 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 74135 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:10.998 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74135 00:12:11.257 killing process with pid 74135 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74135' 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 74135 00:12:11.257 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 74135 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
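[Editor's note] Teardown above mirrored the setup loop, then dropped the referral and verified nothing was left behind. Condensed:

  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # takes its ns/listener with it
      rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
  # discovery.sh@49 then asserts that bdev_get_bdevs | jq -r '.[].name' is empty.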
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:12.191 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:12:12.448 00:12:12.448 real 0m3.652s 00:12:12.448 user 0m8.998s 00:12:12.448 sys 0m0.991s 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
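[Editor's note] The `iptr` step visible in the teardown above is the sweep half of a tag-and-sweep firewall pattern: every rule the harness installs (via the `ipts` wrapper, seen later in this log) carries an SPDK_NVMF comment, so cleanup can remove them all without tracking rule numbers. Roughly:

  # Install: tag each rule with a comment recording exactly what was added.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # Sweep: reload the ruleset minus every tagged line.
  iptables-save | grep -v SPDK_NVMF | iptables-restore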
# xtrace_disable 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.448 ************************************ 00:12:12.448 END TEST nvmf_target_discovery 00:12:12.448 ************************************ 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.448 ************************************ 00:12:12.448 START TEST nvmf_referrals 00:12:12.448 ************************************ 00:12:12.448 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.448 * Looking for test storage... 00:12:12.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.448 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.448 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.448 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.707 --rc genhtml_branch_coverage=1 00:12:12.707 --rc genhtml_function_coverage=1 00:12:12.707 --rc genhtml_legend=1 00:12:12.707 --rc geninfo_all_blocks=1 00:12:12.707 --rc geninfo_unexecuted_blocks=1 00:12:12.707 00:12:12.707 ' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.707 --rc genhtml_branch_coverage=1 00:12:12.707 --rc genhtml_function_coverage=1 00:12:12.707 --rc genhtml_legend=1 00:12:12.707 --rc geninfo_all_blocks=1 00:12:12.707 --rc geninfo_unexecuted_blocks=1 00:12:12.707 00:12:12.707 ' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.707 --rc genhtml_branch_coverage=1 00:12:12.707 --rc genhtml_function_coverage=1 00:12:12.707 --rc genhtml_legend=1 00:12:12.707 --rc geninfo_all_blocks=1 00:12:12.707 --rc geninfo_unexecuted_blocks=1 00:12:12.707 00:12:12.707 ' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.707 --rc genhtml_branch_coverage=1 00:12:12.707 --rc genhtml_function_coverage=1 00:12:12.707 --rc genhtml_legend=1 00:12:12.707 --rc geninfo_all_blocks=1 00:12:12.707 --rc geninfo_unexecuted_blocks=1 00:12:12.707 00:12:12.707 ' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
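[Editor's note] The long scripts/common.sh trace above is just a componentwise version comparison: `lt 1.15 2` splits both strings on `.` and `-`, treats missing fields as zero, and compares field by field; since 1 < 2, the lcov 1.x coverage options get selected. A minimal standalone sketch of the same idea (simplified; the real cmp_versions also handles other operators):

  lt() {                             # succeed iff $1 < $2 as dotted versions
      local IFS='.-'
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                       # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.x options apply"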
00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.707 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:12.708 09:44:00 
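[Editor's note] One genuine wart surfaces above: `nvmf/common.sh: line 33: [: : integer expression expected`. The script applies a numeric test (`'[' '' -eq 1 ']'`) to a variable that is empty in this run, which `[` rejects; the harness proceeds only because the failed test falls through to the false branch. Defaulting the value silences the error — a sketch with a stand-in variable name, not the actual common.sh fix:

  # Noisy when the variable is empty or unset:
  [ "$SPDK_TEST_FOO" -eq 1 ] && enable_foo        # SPDK_TEST_FOO/enable_foo are hypothetical
  # Quiet: substitute 0 before the numeric comparison.
  [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && enable_foo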
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:12.708 Cannot find device "nvmf_init_br" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:12.708 Cannot find device "nvmf_init_br2" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:12.708 Cannot find device "nvmf_tgt_br" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.708 Cannot find device "nvmf_tgt_br2" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:12.708 Cannot find device "nvmf_init_br" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:12.708 Cannot find device "nvmf_init_br2" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:12.708 Cannot find device "nvmf_tgt_br" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:12.708 Cannot find device "nvmf_tgt_br2" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:12.708 Cannot find device "nvmf_br" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:12.708 Cannot find device "nvmf_init_if" 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:12.708 Cannot find device "nvmf_init_if2" 00:12:12.708 09:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:12:12.708 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
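[Editor's note] Once the stale-device probes above all come back "Cannot find device", nvmf_veth_init builds the test network from scratch: a namespace for the target, veth pairs toward each side, addresses 10.0.0.1-10.0.0.4, and (just below) a bridge tying the host-side ends together. The topology, condensed from the commands in the trace (the *_if2/*_br2 twins repeat the same steps):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the ns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side ends
  ip link set nvmf_tgt_br  master nvmf_br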
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:12.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:12.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:12.965 00:12:12.965 --- 10.0.0.3 ping statistics --- 00:12:12.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.965 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:12.965 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:13.223 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:13.223 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:12:13.223 00:12:13.223 --- 10.0.0.4 ping statistics --- 00:12:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.223 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:13.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:13.223 00:12:13.223 --- 10.0.0.1 ping statistics --- 00:12:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.223 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:13.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:12:13.223 00:12:13.223 --- 10.0.0.2 ping statistics --- 00:12:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.223 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=74422 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 74422 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 74422 ']' 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.223 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.224 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.224 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.224 [2024-12-05 09:44:00.770141] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
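[Editor's note] With all four ping paths verified, nvmfappstart launches the target inside the namespace so it owns 10.0.0.3/10.0.0.4, exactly as the command line in the trace shows:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # harness helper; blocks until the RPC socket answers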
00:12:13.224 [2024-12-05 09:44:00.770319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.481 [2024-12-05 09:44:00.965397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.738 [2024-12-05 09:44:01.138141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.738 [2024-12-05 09:44:01.138233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.738 [2024-12-05 09:44:01.138262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.738 [2024-12-05 09:44:01.138280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.738 [2024-12-05 09:44:01.138302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.738 [2024-12-05 09:44:01.141278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.738 [2024-12-05 09:44:01.141431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.738 [2024-12-05 09:44:01.141538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.738 [2024-12-05 09:44:01.141662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 [2024-12-05 09:44:01.819723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 [2024-12-05 09:44:01.844260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
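[Editor's note] referrals.sh then drives the feature under test: one TCP transport, the discovery service on port 8009, and (continuing below) three referrals pointing at the loopback addresses defined in common.sh. Condensed:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192              # -u: in-capsule data size
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done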
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:14.304 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
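[Editor's note] get_referral_ips checks the same state from both sides — the target's RPC view and what an initiator actually sees on the discovery port — as the continuation below shows. Roughly:

  # Target view: addresses of all configured referrals.
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Initiator view: every record except the current discovery subsystem
  # (run with the --hostnqn/--hostid flags shown in the trace).
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort
  # Both should yield: 127.0.0.2 127.0.0.3 127.0.0.4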
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:14.562 09:44:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:14.563 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.821 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.079 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.337 09:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.337 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.596 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 
--hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.596 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -a 10.0.0.3 -s 8009 -o json 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:15.856 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.115 
09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.115 rmmod nvme_tcp 00:12:16.115 rmmod nvme_fabrics 00:12:16.115 rmmod nvme_keyring 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 74422 ']' 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 74422 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 74422 ']' 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 74422 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74422 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74422' 00:12:16.115 killing process with pid 74422 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 74422 00:12:16.115 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 74422 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:17.051 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:17.052 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:12:17.310 ************************************ 00:12:17.310 END TEST nvmf_referrals 00:12:17.310 ************************************ 00:12:17.310 00:12:17.310 real 0m4.865s 00:12:17.310 user 0m14.359s 00:12:17.310 sys 0m1.331s 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.310 ************************************ 00:12:17.310 START TEST nvmf_connect_disconnect 00:12:17.310 ************************************ 00:12:17.310 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:17.570 * Looking for test storage... 
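The nvmf_referrals run that finishes above exercises the discovery-referral RPCs end-to-end and cross-checks the target's view against what a connecting host sees. A condensed sketch of that flow, assuming rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py aimed at the running target's default RPC socket; the RPC names, addresses, ports, and jq filters are copied from the trace, while the rpc.py path and the omitted --hostnqn/--hostid flags are assumptions:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper path

    # Target side: TCP transport plus a discovery listener on 10.0.0.3:8009.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery

    # Add three referrals, then read them back through the RPC interface.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host side: the same addresses must appear in the discovery log page
    # (every record except the current discovery subsystem entry itself).
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # Removing the referrals must empty both views again.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done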
00:12:17.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:17.570 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.570 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.570 09:44:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.570 --rc genhtml_branch_coverage=1 00:12:17.570 --rc genhtml_function_coverage=1 00:12:17.570 --rc genhtml_legend=1 00:12:17.570 --rc geninfo_all_blocks=1 00:12:17.570 --rc geninfo_unexecuted_blocks=1 00:12:17.570 00:12:17.570 ' 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.570 --rc genhtml_branch_coverage=1 00:12:17.570 --rc genhtml_function_coverage=1 00:12:17.570 --rc genhtml_legend=1 00:12:17.570 --rc geninfo_all_blocks=1 00:12:17.570 --rc geninfo_unexecuted_blocks=1 00:12:17.570 00:12:17.570 ' 00:12:17.570 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.570 --rc genhtml_branch_coverage=1 00:12:17.570 --rc genhtml_function_coverage=1 00:12:17.570 --rc genhtml_legend=1 00:12:17.570 --rc geninfo_all_blocks=1 00:12:17.570 --rc geninfo_unexecuted_blocks=1 00:12:17.570 00:12:17.571 ' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.571 --rc genhtml_branch_coverage=1 00:12:17.571 --rc genhtml_function_coverage=1 00:12:17.571 --rc genhtml_legend=1 00:12:17.571 --rc geninfo_all_blocks=1 00:12:17.571 --rc geninfo_unexecuted_blocks=1 00:12:17.571 00:12:17.571 ' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.571 09:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:17.571 Cannot find device "nvmf_init_br" 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:17.571 Cannot find device "nvmf_init_br2" 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:17.571 Cannot find device "nvmf_tgt_br" 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.571 Cannot find device "nvmf_tgt_br2" 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:12:17.571 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:17.571 Cannot find device "nvmf_init_br" 00:12:17.572 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:12:17.572 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:17.572 Cannot find device "nvmf_init_br2" 00:12:17.572 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:12:17.572 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:17.829 Cannot find device "nvmf_tgt_br" 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:17.829 Cannot find device "nvmf_tgt_br2" 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
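The "Cannot find device" (and, just below, "Cannot open network namespace") errors in this stretch are nvmf_veth_init idempotently tearing down a topology that does not exist yet; the trace that follows then builds it. In outline, using the interface names and addresses from the trace and showing only the first initiator/target pair (the second pair and the iptables ACCEPT rules follow the same pattern):

    # Target interfaces live in their own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one end carries traffic, the peer plugs into a bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator 10.0.0.1 on the host, target 10.0.0.3 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up

    # The bridge ties the host-side peers together, so 10.0.0.1 (host)
    # can reach 10.0.0.3 (namespace) over plain TCP.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br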
00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:17.829 Cannot find device "nvmf_br" 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:17.829 Cannot find device "nvmf_init_if" 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:17.829 Cannot find device "nvmf_init_if2" 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:17.829 09:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:17.829 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:17.830 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:17.830 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:17.830 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:17.830 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:17.830 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:18.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:18.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:18.087 00:12:18.087 --- 10.0.0.3 ping statistics --- 00:12:18.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.087 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:18.087 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:18.087 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:12:18.087 00:12:18.087 --- 10.0.0.4 ping statistics --- 00:12:18.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.087 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:18.087 00:12:18.087 --- 10.0.0.1 ping statistics --- 00:12:18.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.087 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:18.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:12:18.087 00:12:18.087 --- 10.0.0.2 ping statistics --- 00:12:18.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.087 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=74793 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 74793 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 74793 ']' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.087 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.088 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.088 [2024-12-05 09:44:05.693511] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:18.088 [2024-12-05 09:44:05.694320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.345 [2024-12-05 09:44:05.875925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.603 [2024-12-05 09:44:05.997877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.603 [2024-12-05 09:44:05.997963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.603 [2024-12-05 09:44:05.997983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.603 [2024-12-05 09:44:05.997995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.603 [2024-12-05 09:44:05.998010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
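Below, the target is provisioned over RPC (a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, a TCP listener on 10.0.0.3:4420) and connect_disconnect.sh then runs 100 iterations with tracing suppressed (set +x), so only the nvme disconnect output survives as the long run of "disconnected 1 controller(s)" lines. A sketch of what each hidden iteration presumably does, given NVME_CONNECT='nvme connect -i 8' in the trace; the exact loop body, the host NQN/ID flags, and any wait-for-device step are assumptions:

    for i in $(seq 1 100); do
        # -i 8 caps the number of I/O queues, per NVME_CONNECT in the trace.
        nvme connect -i 8 -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # (assumed) wait for the controller to come up before tearing it down;
        # the disconnect is what prints the "disconnected 1 controller(s)" line.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done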
00:12:18.603 [2024-12-05 09:44:06.000196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.603 [2024-12-05 09:44:06.000367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.603 [2024-12-05 09:44:06.000460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.603 [2024-12-05 09:44:06.000660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.168 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.168 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:19.168 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.168 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.168 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.169 [2024-12-05 09:44:06.717044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.169 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.427 09:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.427 [2024-12-05 09:44:06.839165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:19.427 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:21.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.376 
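The rpc_cmd provisioning traced above plus the loop producing the disconnect notices around this point reduce to the following; a hedged reconstruction of test/nvmf/target/connect_disconnect.sh, where waitforserial is the suite's helper (assumed name) that waits until the connected namespace shows up as a block device:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                                     # returns Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  for ((i = 0; i < 100; i++)); do                                    # num_iterations=100 above
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
      waitforserial SPDKISFASTANDAWESOME                             # suite helper (assumed)
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # prints the notices seen here
  done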
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... the same one-line notice repeats once per connect/disconnect iteration, timestamps 00:13:32.277 through 00:15:17.453, elided here ...] 00:15:19.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:15:21.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.475 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.475 rmmod nvme_tcp 00:16:07.475 rmmod nvme_fabrics 00:16:07.475 rmmod nvme_keyring 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 74793 ']' 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 74793 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 74793 ']' 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 74793 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:07.475 
09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74793 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.475 killing process with pid 74793 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74793' 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 74793 00:16:07.475 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 74793 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:08.844 09:47:56 
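The nvmf_tcp_fini teardown traced here follows a simple pattern: restore iptables minus the SPDK-tagged rules, then dismantle the veth/bridge fixture and the target namespace. A condensed sketch (the final namespace removal inside _remove_spdk_ns is assumed to be a plain ip netns delete):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if                            # deleting a veth removes its peer too
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                       # assumed: what _remove_spdk_ns amounts to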
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.844 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:16:09.101 00:16:09.101 real 3m51.624s 00:16:09.101 user 15m2.964s 00:16:09.101 sys 0m18.921s 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.101 ************************************ 00:16:09.101 END TEST nvmf_connect_disconnect 00:16:09.101 ************************************ 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.101 ************************************ 00:16:09.101 START TEST nvmf_multitarget 00:16:09.101 ************************************ 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:09.101 * Looking for test storage... 
00:16:09.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:09.101 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:09.102 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.360 --rc genhtml_branch_coverage=1 00:16:09.360 --rc genhtml_function_coverage=1 00:16:09.360 --rc genhtml_legend=1 00:16:09.360 --rc geninfo_all_blocks=1 00:16:09.360 --rc geninfo_unexecuted_blocks=1 00:16:09.360 00:16:09.360 ' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.360 --rc genhtml_branch_coverage=1 00:16:09.360 --rc genhtml_function_coverage=1 00:16:09.360 --rc genhtml_legend=1 00:16:09.360 --rc geninfo_all_blocks=1 00:16:09.360 --rc geninfo_unexecuted_blocks=1 00:16:09.360 00:16:09.360 ' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.360 --rc genhtml_branch_coverage=1 00:16:09.360 --rc genhtml_function_coverage=1 00:16:09.360 --rc genhtml_legend=1 00:16:09.360 --rc geninfo_all_blocks=1 00:16:09.360 --rc geninfo_unexecuted_blocks=1 00:16:09.360 00:16:09.360 ' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:09.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.360 --rc genhtml_branch_coverage=1 00:16:09.360 --rc genhtml_function_coverage=1 00:16:09.360 --rc genhtml_legend=1 00:16:09.360 --rc geninfo_all_blocks=1 00:16:09.360 --rc geninfo_unexecuted_blocks=1 00:16:09.360 00:16:09.360 ' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.360 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.361 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:09.361 Cannot find device "nvmf_init_br" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:09.361 Cannot find device "nvmf_init_br2" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:09.361 Cannot find device "nvmf_tgt_br" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:09.361 Cannot find device "nvmf_tgt_br2" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:09.361 Cannot find device "nvmf_init_br" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:09.361 Cannot find device "nvmf_init_br2" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:09.361 Cannot find device "nvmf_tgt_br" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:09.361 Cannot find device "nvmf_tgt_br2" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:09.361 Cannot find device "nvmf_br" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:09.361 Cannot find device "nvmf_init_if" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:09.361 Cannot find device "nvmf_init_if2" 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.361 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.619 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.619 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.619 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:16:09.619 00:16:09.619 --- 10.0.0.3 ping statistics --- 00:16:09.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.619 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.619 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:09.619 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:16:09.619 00:16:09.619 --- 10.0.0.4 ping statistics --- 00:16:09.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.619 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:16:09.619 00:16:09.619 --- 10.0.0.1 ping statistics --- 00:16:09.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.619 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:09.619 00:16:09.619 --- 10.0.0.2 ping statistics --- 00:16:09.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.619 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.619 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=78635 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 78635 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 78635 ']' 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.620 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.878 [2024-12-05 09:47:57.290310] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
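Stepping back, the nvmf_veth_init fixture that those pings just verified builds the following topology, condensed from the commands traced above (the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                            # bridge the two sides
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                                 # initiator -> target namespace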
00:16:09.879 [2024-12-05 09:47:57.290505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.879 [2024-12-05 09:47:57.487819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.138 [2024-12-05 09:47:57.657493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.138 [2024-12-05 09:47:57.657615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.138 [2024-12-05 09:47:57.657645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.138 [2024-12-05 09:47:57.657663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.138 [2024-12-05 09:47:57.657686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.138 [2024-12-05 09:47:57.660669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.138 [2024-12-05 09:47:57.660854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.138 [2024-12-05 09:47:57.661274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.138 [2024-12-05 09:47:57.661671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:10.703 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:10.961 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:10.961 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:10.961 "nvmf_tgt_1" 00:16:11.219 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:11.219 "nvmf_tgt_2" 00:16:11.219 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.219 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:16:11.219 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:11.219 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:11.478 true 00:16:11.478 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:11.478 true 00:16:11.478 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.478 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:11.735 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.736 rmmod nvme_tcp 00:16:11.736 rmmod nvme_fabrics 00:16:11.736 rmmod nvme_keyring 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 78635 ']' 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 78635 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 78635 ']' 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 78635 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.736 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78635 00:16:11.994 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.994 killing process with pid 78635 00:16:11.994 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.994 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
78635' 00:16:11.994 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 78635 00:16:11.994 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 78635 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:12.929 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:16:13.188 00:16:13.188 
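The multitarget exercise that just finished reduces to a handful of calls against the shipped multitarget_rpc.py helper, with jq length asserting the target count at each step; reconstructed:

  rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two just created
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only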
real 0m4.061s 00:16:13.188 user 0m11.670s 00:16:13.188 sys 0m1.006s 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:13.188 ************************************ 00:16:13.188 END TEST nvmf_multitarget 00:16:13.188 ************************************ 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.188 ************************************ 00:16:13.188 START TEST nvmf_rpc 00:16:13.188 ************************************ 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.188 * Looking for test storage... 00:16:13.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.188 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:13.447 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.448 --rc genhtml_branch_coverage=1 00:16:13.448 --rc genhtml_function_coverage=1 00:16:13.448 --rc genhtml_legend=1 00:16:13.448 --rc geninfo_all_blocks=1 00:16:13.448 --rc geninfo_unexecuted_blocks=1 00:16:13.448 00:16:13.448 ' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.448 --rc genhtml_branch_coverage=1 00:16:13.448 --rc genhtml_function_coverage=1 00:16:13.448 --rc genhtml_legend=1 00:16:13.448 --rc geninfo_all_blocks=1 00:16:13.448 --rc geninfo_unexecuted_blocks=1 00:16:13.448 00:16:13.448 ' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.448 --rc genhtml_branch_coverage=1 00:16:13.448 --rc genhtml_function_coverage=1 00:16:13.448 --rc genhtml_legend=1 00:16:13.448 --rc geninfo_all_blocks=1 00:16:13.448 --rc geninfo_unexecuted_blocks=1 00:16:13.448 00:16:13.448 ' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.448 --rc genhtml_branch_coverage=1 00:16:13.448 --rc genhtml_function_coverage=1 00:16:13.448 --rc genhtml_legend=1 00:16:13.448 --rc geninfo_all_blocks=1 00:16:13.448 --rc geninfo_unexecuted_blocks=1 00:16:13.448 00:16:13.448 ' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.448 09:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.448 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:13.449 Cannot find device "nvmf_init_br" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:16:13.449 09:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:13.449 Cannot find device "nvmf_init_br2" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:13.449 Cannot find device "nvmf_tgt_br" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.449 Cannot find device "nvmf_tgt_br2" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:13.449 Cannot find device "nvmf_init_br" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:13.449 Cannot find device "nvmf_init_br2" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:13.449 Cannot find device "nvmf_tgt_br" 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:16:13.449 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:13.449 Cannot find device "nvmf_tgt_br2" 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:13.449 Cannot find device "nvmf_br" 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:13.449 Cannot find device "nvmf_init_if" 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:13.449 Cannot find device "nvmf_init_if2" 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:16:13.449 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:13.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:13.707 00:16:13.707 --- 10.0.0.3 ping statistics --- 00:16:13.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.707 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:13.707 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:13.707 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:13.707 00:16:13.707 --- 10.0.0.4 ping statistics --- 00:16:13.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.707 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:13.707 00:16:13.707 --- 10.0.0.1 ping statistics --- 00:16:13.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.707 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:13.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:13.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:13.707 00:16:13.707 --- 10.0.0.2 ping statistics --- 00:16:13.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.707 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:13.707 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=78927 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 78927 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 78927 ']' 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.965 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 [2024-12-05 09:48:01.490023] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
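At this point nvmf_veth_init has rebuilt the test network: two host-side initiator veths (10.0.0.1, 10.0.0.2), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all bridged through nvmf_br, with SPDK-tagged iptables ACCEPT rules for port 4420 and ping checks in both directions. nvmfappstart then launches the target inside that namespace and waits for its RPC socket. A simplified sketch of the launch-and-wait step, assuming waitforlisten amounts to polling the socket (the real helper also caps retries, per the max_retries=100 in the trace):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app answers on /var/tmp/spdk.sock; rpc_get_methods is a cheap
# RPC that succeeds as soon as the listener is up.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done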
00:16:13.965 [2024-12-05 09:48:01.490212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.223 [2024-12-05 09:48:01.691219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.481 [2024-12-05 09:48:01.859331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.481 [2024-12-05 09:48:01.859405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.481 [2024-12-05 09:48:01.859432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.481 [2024-12-05 09:48:01.859456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.481 [2024-12-05 09:48:01.859479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.481 [2024-12-05 09:48:01.862208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.481 [2024-12-05 09:48:01.862307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.481 [2024-12-05 09:48:01.862441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.481 [2024-12-05 09:48:01.862450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.048 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.048 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:15.048 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.048 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.048 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:15.049 "poll_groups": [ 00:16:15.049 { 00:16:15.049 "admin_qpairs": 0, 00:16:15.049 "completed_nvme_io": 0, 00:16:15.049 "current_admin_qpairs": 0, 00:16:15.049 "current_io_qpairs": 0, 00:16:15.049 "io_qpairs": 0, 00:16:15.049 "name": "nvmf_tgt_poll_group_000", 00:16:15.049 "pending_bdev_io": 0, 00:16:15.049 "transports": [] 00:16:15.049 }, 00:16:15.049 { 00:16:15.049 "admin_qpairs": 0, 00:16:15.049 "completed_nvme_io": 0, 00:16:15.049 "current_admin_qpairs": 0, 00:16:15.049 "current_io_qpairs": 0, 00:16:15.049 "io_qpairs": 0, 00:16:15.049 "name": "nvmf_tgt_poll_group_001", 00:16:15.049 "pending_bdev_io": 0, 00:16:15.049 "transports": [] 00:16:15.049 }, 00:16:15.049 { 00:16:15.049 "admin_qpairs": 0, 00:16:15.049 "completed_nvme_io": 0, 00:16:15.049 "current_admin_qpairs": 0, 00:16:15.049 "current_io_qpairs": 0, 
00:16:15.049 "io_qpairs": 0, 00:16:15.049 "name": "nvmf_tgt_poll_group_002", 00:16:15.049 "pending_bdev_io": 0, 00:16:15.049 "transports": [] 00:16:15.049 }, 00:16:15.049 { 00:16:15.049 "admin_qpairs": 0, 00:16:15.049 "completed_nvme_io": 0, 00:16:15.049 "current_admin_qpairs": 0, 00:16:15.049 "current_io_qpairs": 0, 00:16:15.049 "io_qpairs": 0, 00:16:15.049 "name": "nvmf_tgt_poll_group_003", 00:16:15.049 "pending_bdev_io": 0, 00:16:15.049 "transports": [] 00:16:15.049 } 00:16:15.049 ], 00:16:15.049 "tick_rate": 2200000000 00:16:15.049 }' 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.049 [2024-12-05 09:48:02.649238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.049 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:15.308 "poll_groups": [ 00:16:15.308 { 00:16:15.308 "admin_qpairs": 0, 00:16:15.308 "completed_nvme_io": 0, 00:16:15.308 "current_admin_qpairs": 0, 00:16:15.308 "current_io_qpairs": 0, 00:16:15.308 "io_qpairs": 0, 00:16:15.308 "name": "nvmf_tgt_poll_group_000", 00:16:15.308 "pending_bdev_io": 0, 00:16:15.308 "transports": [ 00:16:15.308 { 00:16:15.308 "trtype": "TCP" 00:16:15.308 } 00:16:15.308 ] 00:16:15.308 }, 00:16:15.308 { 00:16:15.308 "admin_qpairs": 0, 00:16:15.308 "completed_nvme_io": 0, 00:16:15.308 "current_admin_qpairs": 0, 00:16:15.308 "current_io_qpairs": 0, 00:16:15.308 "io_qpairs": 0, 00:16:15.308 "name": "nvmf_tgt_poll_group_001", 00:16:15.308 "pending_bdev_io": 0, 00:16:15.308 "transports": [ 00:16:15.308 { 00:16:15.308 "trtype": "TCP" 00:16:15.308 } 00:16:15.308 ] 00:16:15.308 }, 00:16:15.308 { 00:16:15.308 "admin_qpairs": 0, 00:16:15.308 "completed_nvme_io": 0, 00:16:15.308 "current_admin_qpairs": 0, 00:16:15.308 "current_io_qpairs": 0, 00:16:15.308 "io_qpairs": 0, 00:16:15.308 "name": "nvmf_tgt_poll_group_002", 00:16:15.308 "pending_bdev_io": 0, 00:16:15.308 "transports": [ 00:16:15.308 { 00:16:15.308 "trtype": "TCP" 00:16:15.308 } 
00:16:15.308 ] 00:16:15.308 }, 00:16:15.308 { 00:16:15.308 "admin_qpairs": 0, 00:16:15.308 "completed_nvme_io": 0, 00:16:15.308 "current_admin_qpairs": 0, 00:16:15.308 "current_io_qpairs": 0, 00:16:15.308 "io_qpairs": 0, 00:16:15.308 "name": "nvmf_tgt_poll_group_003", 00:16:15.308 "pending_bdev_io": 0, 00:16:15.308 "transports": [ 00:16:15.308 { 00:16:15.308 "trtype": "TCP" 00:16:15.308 } 00:16:15.308 ] 00:16:15.308 } 00:16:15.308 ], 00:16:15.308 "tick_rate": 2200000000 00:16:15.308 }' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.308 Malloc1 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:15.308 09:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.308 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.309 [2024-12-05 09:48:02.911167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.3 -s 4420 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.3 -s 4420 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:15.309 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.3 -s 4420 00:16:15.568 [2024-12-05 09:48:02.943798] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516' 00:16:15.568 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:15.568 could not add new controller: failed to write to nvme-fabrics device 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
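The connect failure above is the point of this step: rpc.sh created cnode1, then disabled allow-any-host (nvmf_subsystem_allow_any_host -d), so the target must reject this run's generated host NQN with "does not allow host" before that NQN is whitelisted and a second connect succeeds. The flow, sketched with the same RPCs (the rpc.py path and the elided --hostid flag are assumptions based on the rest of this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
$rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the host ACL
# Expected to fail with "does not allow host" / nvme-fabrics I/O error:
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=$hostnqn && echo 'BUG: ACL was not enforced'
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $hostnqn   # whitelist this host
# The same command now succeeds and a namespace appears on the initiator:
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=$hostnqn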
00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.568 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:15.568 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.568 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.568 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.568 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.568 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:18.115 [2024-12-05 09:48:05.265197] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516' 00:16:18.115 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:18.115 could not add new controller: failed to write to nvme-fabrics device 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.115 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.016 [2024-12-05 09:48:07.592115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.016 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:20.275 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.275 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.275 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.275 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.275 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.177 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.435 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.436 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 [2024-12-05 09:48:10.010534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 09:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.436 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:22.694 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.694 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.694 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.694 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.694 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:24.600 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 09:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 [2024-12-05 09:48:12.344005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.859 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:25.118 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.118 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:25.118 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.118 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:25.118 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:27.029 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:27.289 09:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 [2024-12-05 09:48:14.765462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.289 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:27.548 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.548 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:27.548 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.548 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:27.548 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:29.502 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:29.502 09:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.502 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.502 [2024-12-05 09:48:17.088863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:29.792 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.793 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.793 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.793 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.793 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:31.694 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
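The trace above is one pass of rpc.sh's create/connect/disconnect loop; the passes that follow (target/rpc.sh@99 onward) repeat the same create/delete cycle without the host-side connect. Below is a minimal sketch of the loop body, reconstructed from the commands visible in the trace. It assumes rpc.py is on PATH and that the harness supplies $loops, $HOSTNQN, and $HOSTID (their concrete values appear in the log lines above); the waitforserial polling is simplified from the harness's bounded retry loop.

    NQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME
    for i in $(seq 1 "$loops"); do
        rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
        rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
        rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # expose bdev Malloc1 as namespace 5
        rpc.py nvmf_subsystem_allow_any_host "$NQN"
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.3 -s 4420
        # waitforserial: poll until a block device carrying the serial shows up
        until (( $(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL") >= 1 )); do
            sleep 2
        done
        nvme disconnect -n "$NQN"
        rpc.py nvmf_subsystem_remove_ns "$NQN" 5
        rpc.py nvmf_delete_subsystem "$NQN"
    done

The second loop that follows exercises the same RPC surface back-to-back (create subsystem, add listener, add namespace, remove namespace, delete subsystem) to stress the target without ever attaching an initiator, then reads back nvmf_get_stats and sums the per-poll-group qpair counters.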
00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 [2024-12-05 09:48:19.526809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 [2024-12-05 09:48:19.574856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.952 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.210 09:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 [2024-12-05 09:48:19.626916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 [2024-12-05 09:48:19.678982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 
09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 [2024-12-05 09:48:19.731082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.210 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:32.211 "poll_groups": [ 00:16:32.211 { 00:16:32.211 "admin_qpairs": 2, 00:16:32.211 "completed_nvme_io": 165, 00:16:32.211 "current_admin_qpairs": 0, 00:16:32.211 "current_io_qpairs": 0, 00:16:32.211 "io_qpairs": 16, 00:16:32.211 "name": "nvmf_tgt_poll_group_000", 00:16:32.211 "pending_bdev_io": 0, 00:16:32.211 "transports": [ 00:16:32.211 { 00:16:32.211 "trtype": "TCP" 00:16:32.211 } 00:16:32.211 ] 00:16:32.211 }, 00:16:32.211 { 00:16:32.211 "admin_qpairs": 3, 00:16:32.211 "completed_nvme_io": 67, 00:16:32.211 "current_admin_qpairs": 0, 00:16:32.211 "current_io_qpairs": 0, 00:16:32.211 "io_qpairs": 17, 00:16:32.211 "name": "nvmf_tgt_poll_group_001", 00:16:32.211 "pending_bdev_io": 0, 00:16:32.211 "transports": [ 00:16:32.211 { 00:16:32.211 "trtype": "TCP" 00:16:32.211 } 00:16:32.211 ] 00:16:32.211 }, 00:16:32.211 { 00:16:32.211 "admin_qpairs": 1, 00:16:32.211 "completed_nvme_io": 68, 00:16:32.211 "current_admin_qpairs": 0, 00:16:32.211 "current_io_qpairs": 0, 00:16:32.211 "io_qpairs": 19, 00:16:32.211 "name": "nvmf_tgt_poll_group_002", 00:16:32.211 "pending_bdev_io": 0, 00:16:32.211 "transports": [ 00:16:32.211 { 00:16:32.211 "trtype": "TCP" 00:16:32.211 } 00:16:32.211 ] 00:16:32.211 }, 00:16:32.211 { 00:16:32.211 "admin_qpairs": 1, 00:16:32.211 "completed_nvme_io": 120, 00:16:32.211 "current_admin_qpairs": 0, 00:16:32.211 "current_io_qpairs": 0, 00:16:32.211 "io_qpairs": 18, 00:16:32.211 "name": "nvmf_tgt_poll_group_003", 00:16:32.211 "pending_bdev_io": 0, 00:16:32.211 "transports": [ 00:16:32.211 { 00:16:32.211 "trtype": "TCP" 00:16:32.211 } 00:16:32.211 ] 00:16:32.211 } 00:16:32.211 ], 
00:16:32.211 "tick_rate": 2200000000 00:16:32.211 }' 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:32.211 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.468 rmmod nvme_tcp 00:16:32.468 rmmod nvme_fabrics 00:16:32.468 rmmod nvme_keyring 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 78927 ']' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 78927 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 78927 ']' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 78927 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.468 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78927 00:16:32.468 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.468 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.468 killing process with pid 78927 00:16:32.468 09:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78927' 00:16:32.468 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 78927 00:16:32.468 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 78927 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:16:33.845 00:16:33.845 real 0m20.764s 00:16:33.845 user 1m16.093s 00:16:33.845 sys 0m2.594s 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.845 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 ************************************ 00:16:33.845 END TEST nvmf_rpc 00:16:33.845 ************************************ 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.105 ************************************ 00:16:34.105 START TEST nvmf_invalid 00:16:34.105 ************************************ 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:34.105 * Looking for test storage... 00:16:34.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:34.105 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.106 --rc genhtml_branch_coverage=1 00:16:34.106 --rc genhtml_function_coverage=1 00:16:34.106 --rc genhtml_legend=1 00:16:34.106 --rc geninfo_all_blocks=1 00:16:34.106 --rc geninfo_unexecuted_blocks=1 00:16:34.106 00:16:34.106 ' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.106 --rc genhtml_branch_coverage=1 00:16:34.106 --rc genhtml_function_coverage=1 00:16:34.106 --rc genhtml_legend=1 00:16:34.106 --rc geninfo_all_blocks=1 00:16:34.106 --rc geninfo_unexecuted_blocks=1 00:16:34.106 00:16:34.106 ' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.106 --rc genhtml_branch_coverage=1 00:16:34.106 --rc genhtml_function_coverage=1 00:16:34.106 --rc genhtml_legend=1 00:16:34.106 --rc geninfo_all_blocks=1 00:16:34.106 --rc geninfo_unexecuted_blocks=1 00:16:34.106 00:16:34.106 ' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.106 --rc genhtml_branch_coverage=1 00:16:34.106 --rc genhtml_function_coverage=1 00:16:34.106 --rc genhtml_legend=1 00:16:34.106 --rc geninfo_all_blocks=1 00:16:34.106 --rc geninfo_unexecuted_blocks=1 00:16:34.106 00:16:34.106 ' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:34.106 09:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:34.106 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
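
These nvmf_veth_init variables name a small four-veth test network; the ip(8) commands that follow in the trace construct it. A condensed sketch of the topology, using only names and addresses taken from the trace (link bring-up via "ip link set ... up" and the four verification pings are omitted):

  # namespace holding the target side of the network
  ip netns add nvmf_tgt_ns_spdk
  # initiator-side veth pairs; the *_if ends stay in the root namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  # target-side veth pairs; the *_if ends move into the namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if    # NVMF_FIRST_INITIATOR_IP
  ip addr add 10.0.0.2/24 dev nvmf_init_if2   # NVMF_SECOND_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
  # one bridge joins the four peer ends, so initiators reach targets at L2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # common.sh's ipts wrapper (common.sh@790) tags each rule with its own
  # argument string, so teardown can later find and delete exactly these rules
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
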
00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.107 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:34.366 Cannot find device "nvmf_init_br" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:34.366 Cannot find device "nvmf_init_br2" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:34.366 Cannot find device "nvmf_tgt_br" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.366 Cannot find device "nvmf_tgt_br2" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:34.366 Cannot find device "nvmf_init_br" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:34.366 Cannot find device "nvmf_init_br2" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:34.366 Cannot find device "nvmf_tgt_br" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:34.366 Cannot find device "nvmf_tgt_br2" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:34.366 Cannot find device "nvmf_br" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:34.366 Cannot find device "nvmf_init_if" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:34.366 Cannot find device "nvmf_init_if2" 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.366 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:34.366 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:34.627 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:34.627 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.627 09:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:34.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:34.627 00:16:34.627 --- 10.0.0.3 ping statistics --- 00:16:34.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.627 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:34.627 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:34.627 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:34.627 00:16:34.627 --- 10.0.0.4 ping statistics --- 00:16:34.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.627 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:16:34.627 00:16:34.627 --- 10.0.0.1 ping statistics --- 00:16:34.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.627 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:34.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:34.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:34.627 00:16:34.627 --- 10.0.0.2 ping statistics --- 00:16:34.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.627 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=79505 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 79505 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 79505 ']' 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.627 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.886 [2024-12-05 09:48:22.270028] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:34.886 [2024-12-05 09:48:22.270207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.886 [2024-12-05 09:48:22.449706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.147 [2024-12-05 09:48:22.624382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.147 [2024-12-05 09:48:22.624452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.147 [2024-12-05 09:48:22.624478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.147 [2024-12-05 09:48:22.624495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.147 [2024-12-05 09:48:22.624517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.147 [2024-12-05 09:48:22.627450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.147 [2024-12-05 09:48:22.627585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.147 [2024-12-05 09:48:22.627659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.147 [2024-12-05 09:48:22.627677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.715 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.715 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:35.715 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.715 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.715 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.976 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.976 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:35.976 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10629 00:16:36.236 [2024-12-05 09:48:23.640003] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:36.236 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/05 09:48:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10629 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:36.236 request: 00:16:36.236 { 00:16:36.236 "method": "nvmf_create_subsystem", 00:16:36.236 "params": { 00:16:36.236 "nqn": "nqn.2016-06.io.spdk:cnode10629", 00:16:36.236 "tgt_name": "foobar" 00:16:36.236 } 00:16:36.236 } 00:16:36.236 Got JSON-RPC error response 00:16:36.236 GoRPCClient: error on JSON-RPC call' 00:16:36.236 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/05 09:48:23 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode10629 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:36.236 request: 00:16:36.236 { 00:16:36.236 "method": "nvmf_create_subsystem", 00:16:36.236 "params": { 00:16:36.236 "nqn": "nqn.2016-06.io.spdk:cnode10629", 00:16:36.236 "tgt_name": "foobar" 00:16:36.236 } 00:16:36.236 } 00:16:36.236 Got JSON-RPC error response 00:16:36.236 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:36.236 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:36.236 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27339 00:16:36.496 [2024-12-05 09:48:23.952399] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27339: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:36.496 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/05 09:48:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27339 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:36.496 request: 00:16:36.496 { 00:16:36.496 "method": "nvmf_create_subsystem", 00:16:36.496 "params": { 00:16:36.496 "nqn": "nqn.2016-06.io.spdk:cnode27339", 00:16:36.496 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:36.496 } 00:16:36.496 } 00:16:36.496 Got JSON-RPC error response 00:16:36.496 GoRPCClient: error on JSON-RPC call' 00:16:36.496 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/05 09:48:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27339 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:36.496 request: 00:16:36.496 { 00:16:36.496 "method": "nvmf_create_subsystem", 00:16:36.496 "params": { 00:16:36.496 "nqn": "nqn.2016-06.io.spdk:cnode27339", 00:16:36.496 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:36.496 } 00:16:36.496 } 00:16:36.496 Got JSON-RPC error response 00:16:36.496 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:36.496 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:36.496 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13999 00:16:36.756 [2024-12-05 09:48:24.192661] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13999: invalid model number 'SPDK_Controller' 00:16:36.756 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/05 09:48:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13999], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:36.756 request: 00:16:36.756 { 00:16:36.756 "method": "nvmf_create_subsystem", 00:16:36.756 "params": { 00:16:36.756 "nqn": "nqn.2016-06.io.spdk:cnode13999", 00:16:36.756 "model_number": "SPDK_Controller\u001f" 
00:16:36.756 } 00:16:36.756 } 00:16:36.756 Got JSON-RPC error response 00:16:36.756 GoRPCClient: error on JSON-RPC call' 00:16:36.756 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/05 09:48:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13999], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:36.756 request: 00:16:36.756 { 00:16:36.756 "method": "nvmf_create_subsystem", 00:16:36.756 "params": { 00:16:36.756 "nqn": "nqn.2016-06.io.spdk:cnode13999", 00:16:36.756 "model_number": "SPDK_Controller\u001f" 00:16:36.756 } 00:16:36.756 } 00:16:36.756 Got JSON-RPC error response 00:16:36.756 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:36.756 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:36.756 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:36.757 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 
00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8|J`:M2Ue4)imyec;@hP,' 00:16:36.757 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '8|J`:M2Ue4)imyec;@hP,' nqn.2016-06.io.spdk:cnode11698 00:16:37.017 [2024-12-05 09:48:24.557185] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11698: invalid serial number '8|J`:M2Ue4)imyec;@hP,' 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/05 09:48:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11698 serial_number:8|J`:M2Ue4)imyec;@hP,], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 8|J`:M2Ue4)imyec;@hP, 00:16:37.017 request: 00:16:37.017 { 00:16:37.017 "method": "nvmf_create_subsystem", 00:16:37.017 "params": { 00:16:37.017 "nqn": "nqn.2016-06.io.spdk:cnode11698", 
00:16:37.017 "serial_number": "8|J`:M2Ue4)imyec;@hP," 00:16:37.017 } 00:16:37.017 } 00:16:37.017 Got JSON-RPC error response 00:16:37.017 GoRPCClient: error on JSON-RPC call' 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/05 09:48:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11698 serial_number:8|J`:M2Ue4)imyec;@hP,], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 8|J`:M2Ue4)imyec;@hP, 00:16:37.017 request: 00:16:37.017 { 00:16:37.017 "method": "nvmf_create_subsystem", 00:16:37.017 "params": { 00:16:37.017 "nqn": "nqn.2016-06.io.spdk:cnode11698", 00:16:37.017 "serial_number": "8|J`:M2Ue4)imyec;@hP," 00:16:37.017 } 00:16:37.017 } 00:16:37.017 Got JSON-RPC error response 00:16:37.017 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:37.017 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:37.018 09:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:37.018 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:37.018 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:37.279 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:37.279 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 
09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 
00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u' 00:16:37.280 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u' nqn.2016-06.io.spdk:cnode12981 00:16:37.540 [2024-12-05 09:48:25.106085] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12981: invalid model number 'C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u' 00:16:37.540 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/05 09:48:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u nqn:nqn.2016-06.io.spdk:cnode12981], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u 00:16:37.540 request: 00:16:37.540 { 00:16:37.540 "method": "nvmf_create_subsystem", 00:16:37.540 "params": { 00:16:37.540 "nqn": "nqn.2016-06.io.spdk:cnode12981", 
00:16:37.540 "model_number": "C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\\bY>XyS\\u" 00:16:37.540 } 00:16:37.540 } 00:16:37.540 Got JSON-RPC error response 00:16:37.540 GoRPCClient: error on JSON-RPC call' 00:16:37.540 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/05 09:48:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u nqn:nqn.2016-06.io.spdk:cnode12981], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\bY>XyS\u 00:16:37.540 request: 00:16:37.540 { 00:16:37.540 "method": "nvmf_create_subsystem", 00:16:37.540 "params": { 00:16:37.540 "nqn": "nqn.2016-06.io.spdk:cnode12981", 00:16:37.540 "model_number": "C(&v`ELjq5Vg:>J`7 qcKS&;5 E JU1r\\bY>XyS\\u" 00:16:37.540 } 00:16:37.540 } 00:16:37.540 Got JSON-RPC error response 00:16:37.540 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:37.540 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:38.105 [2024-12-05 09:48:25.430731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.105 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:38.364 [2024-12-05 09:48:25.955984] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/05 09:48:25 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:38.364 request: 00:16:38.364 { 00:16:38.364 "method": "nvmf_subsystem_remove_listener", 00:16:38.364 "params": { 00:16:38.364 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:38.364 "listen_address": { 00:16:38.364 "trtype": "tcp", 00:16:38.364 "traddr": "", 00:16:38.364 "trsvcid": "4421" 00:16:38.364 } 00:16:38.364 } 00:16:38.364 } 00:16:38.364 Got JSON-RPC error response 00:16:38.364 GoRPCClient: error on JSON-RPC call' 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/05 09:48:25 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:38.364 request: 00:16:38.364 { 00:16:38.364 "method": "nvmf_subsystem_remove_listener", 00:16:38.364 "params": { 00:16:38.364 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:16:38.364 "listen_address": { 00:16:38.364 "trtype": "tcp", 00:16:38.364 "traddr": "", 00:16:38.364 "trsvcid": "4421" 00:16:38.364 } 00:16:38.364 } 00:16:38.364 } 00:16:38.364 Got JSON-RPC error response 00:16:38.364 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:38.364 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21850 -i 0 00:16:38.932 [2024-12-05 09:48:26.260347] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21850: invalid cntlid range [0-65519] 00:16:38.932 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:38.932 request: 00:16:38.932 { 00:16:38.932 "method": "nvmf_create_subsystem", 00:16:38.932 "params": { 00:16:38.932 "nqn": "nqn.2016-06.io.spdk:cnode21850", 00:16:38.932 "min_cntlid": 0 00:16:38.932 } 00:16:38.932 } 00:16:38.932 Got JSON-RPC error response 00:16:38.932 GoRPCClient: error on JSON-RPC call' 00:16:38.932 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:38.932 request: 00:16:38.932 { 00:16:38.932 "method": "nvmf_create_subsystem", 00:16:38.932 "params": { 00:16:38.932 "nqn": "nqn.2016-06.io.spdk:cnode21850", 00:16:38.932 "min_cntlid": 0 00:16:38.932 } 00:16:38.932 } 00:16:38.932 Got JSON-RPC error response 00:16:38.932 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:38.932 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13592 -i 65520 00:16:39.191 [2024-12-05 09:48:26.575169] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13592: invalid cntlid range [65520-65519] 00:16:39.191 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13592], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:39.191 request: 00:16:39.191 { 00:16:39.191 "method": "nvmf_create_subsystem", 00:16:39.191 "params": { 00:16:39.191 "nqn": "nqn.2016-06.io.spdk:cnode13592", 00:16:39.191 "min_cntlid": 65520 00:16:39.191 } 00:16:39.191 } 00:16:39.191 Got JSON-RPC error response 00:16:39.191 GoRPCClient: error on JSON-RPC call' 00:16:39.191 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13592], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:39.191 request: 00:16:39.191 { 00:16:39.191 "method": "nvmf_create_subsystem", 00:16:39.191 "params": { 00:16:39.191 "nqn": 
"nqn.2016-06.io.spdk:cnode13592", 00:16:39.191 "min_cntlid": 65520 00:16:39.191 } 00:16:39.191 } 00:16:39.191 Got JSON-RPC error response 00:16:39.191 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:39.191 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22073 -I 0 00:16:39.191 [2024-12-05 09:48:26.803471] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22073: invalid cntlid range [1-0] 00:16:39.451 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22073], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:39.451 request: 00:16:39.451 { 00:16:39.451 "method": "nvmf_create_subsystem", 00:16:39.451 "params": { 00:16:39.451 "nqn": "nqn.2016-06.io.spdk:cnode22073", 00:16:39.451 "max_cntlid": 0 00:16:39.451 } 00:16:39.451 } 00:16:39.451 Got JSON-RPC error response 00:16:39.451 GoRPCClient: error on JSON-RPC call' 00:16:39.451 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/05 09:48:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22073], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:39.451 request: 00:16:39.451 { 00:16:39.451 "method": "nvmf_create_subsystem", 00:16:39.451 "params": { 00:16:39.451 "nqn": "nqn.2016-06.io.spdk:cnode22073", 00:16:39.451 "max_cntlid": 0 00:16:39.451 } 00:16:39.451 } 00:16:39.451 Got JSON-RPC error response 00:16:39.451 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:39.451 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23143 -I 65520 00:16:39.451 [2024-12-05 09:48:27.031849] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23143: invalid cntlid range [1-65520] 00:16:39.451 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/05 09:48:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23143], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:39.451 request: 00:16:39.451 { 00:16:39.451 "method": "nvmf_create_subsystem", 00:16:39.451 "params": { 00:16:39.451 "nqn": "nqn.2016-06.io.spdk:cnode23143", 00:16:39.451 "max_cntlid": 65520 00:16:39.451 } 00:16:39.451 } 00:16:39.451 Got JSON-RPC error response 00:16:39.451 GoRPCClient: error on JSON-RPC call' 00:16:39.451 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/05 09:48:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23143], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:39.451 request: 00:16:39.451 { 00:16:39.451 "method": "nvmf_create_subsystem", 00:16:39.451 "params": { 00:16:39.451 "nqn": "nqn.2016-06.io.spdk:cnode23143", 00:16:39.451 "max_cntlid": 65520 00:16:39.451 } 00:16:39.451 } 00:16:39.451 Got JSON-RPC 
error response 00:16:39.451 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:39.451 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18192 -i 6 -I 5 00:16:39.711 [2024-12-05 09:48:27.260202] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18192: invalid cntlid range [6-5] 00:16:39.711 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/05 09:48:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18192], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:39.711 request: 00:16:39.711 { 00:16:39.711 "method": "nvmf_create_subsystem", 00:16:39.711 "params": { 00:16:39.711 "nqn": "nqn.2016-06.io.spdk:cnode18192", 00:16:39.711 "min_cntlid": 6, 00:16:39.711 "max_cntlid": 5 00:16:39.711 } 00:16:39.711 } 00:16:39.711 Got JSON-RPC error response 00:16:39.711 GoRPCClient: error on JSON-RPC call' 00:16:39.711 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/05 09:48:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18192], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:39.711 request: 00:16:39.711 { 00:16:39.711 "method": "nvmf_create_subsystem", 00:16:39.711 "params": { 00:16:39.711 "nqn": "nqn.2016-06.io.spdk:cnode18192", 00:16:39.711 "min_cntlid": 6, 00:16:39.711 "max_cntlid": 5 00:16:39.711 } 00:16:39.711 } 00:16:39.711 Got JSON-RPC error response 00:16:39.711 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:39.711 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:39.970 { 00:16:39.970 "name": "foobar", 00:16:39.970 "method": "nvmf_delete_target", 00:16:39.970 "req_id": 1 00:16:39.970 } 00:16:39.970 Got JSON-RPC error response 00:16:39.970 response: 00:16:39.970 { 00:16:39.970 "code": -32602, 00:16:39.970 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:39.970 }' 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:39.970 { 00:16:39.970 "name": "foobar", 00:16:39.970 "method": "nvmf_delete_target", 00:16:39.970 "req_id": 1 00:16:39.970 } 00:16:39.970 Got JSON-RPC error response 00:16:39.970 response: 00:16:39.970 { 00:16:39.970 "code": -32602, 00:16:39.970 "message": "The specified target doesn't exist, cannot delete it." 
00:16:39.970 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.970 rmmod nvme_tcp 00:16:39.970 rmmod nvme_fabrics 00:16:39.970 rmmod nvme_keyring 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 79505 ']' 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 79505 00:16:39.970 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 79505 ']' 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 79505 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79505 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.971 killing process with pid 79505 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79505' 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 79505 00:16:39.971 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 79505 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:40.909 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:16:41.168 00:16:41.168 real 0m7.269s 00:16:41.168 user 0m26.581s 00:16:41.168 sys 0m1.654s 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.168 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.168 ************************************ 00:16:41.168 END TEST nvmf_invalid 00:16:41.168 ************************************ 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:41.428 ************************************ 00:16:41.428 START TEST nvmf_connect_stress 00:16:41.428 
************************************ 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:41.428 * Looking for test storage... 00:16:41.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:41.428 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:41.428 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:41.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.429 --rc genhtml_branch_coverage=1 00:16:41.429 --rc genhtml_function_coverage=1 00:16:41.429 --rc genhtml_legend=1 00:16:41.429 --rc geninfo_all_blocks=1 00:16:41.429 --rc geninfo_unexecuted_blocks=1 00:16:41.429 00:16:41.429 ' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:41.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.429 --rc genhtml_branch_coverage=1 00:16:41.429 --rc genhtml_function_coverage=1 00:16:41.429 --rc genhtml_legend=1 00:16:41.429 --rc geninfo_all_blocks=1 00:16:41.429 --rc geninfo_unexecuted_blocks=1 00:16:41.429 00:16:41.429 ' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:41.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.429 --rc genhtml_branch_coverage=1 00:16:41.429 --rc genhtml_function_coverage=1 00:16:41.429 --rc genhtml_legend=1 00:16:41.429 --rc geninfo_all_blocks=1 00:16:41.429 --rc geninfo_unexecuted_blocks=1 00:16:41.429 00:16:41.429 ' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:41.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.429 --rc genhtml_branch_coverage=1 00:16:41.429 --rc genhtml_function_coverage=1 00:16:41.429 --rc genhtml_legend=1 00:16:41.429 --rc geninfo_all_blocks=1 00:16:41.429 --rc geninfo_unexecuted_blocks=1 00:16:41.429 00:16:41.429 ' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
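The `lt 1.15 2` probe traced above walks `cmp_versions` from scripts/common.sh: both version strings are split on `.`, `-`, and `:` with `IFS=.-: read -ra`, then compared component by component as integers. A minimal standalone sketch of that idiom, assuming purely numeric components (the real helper also supports other operators via the `case "$op"` branch visible in the trace):

# version_lt: return 0 (true) when $1 sorts strictly before $2.
# Stand-alone sketch; not the literal scripts/common.sh implementation.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # A missing component compares as 0, so 1.15 == 1.15.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not strictly less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates the 2.x report format"

Here the check decides that the installed lcov is older than 2.x, which is why the 1.x-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options get exported a few entries later in the trace.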
00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:41.429 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:41.429 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.689 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:41.689 Cannot find device "nvmf_init_br" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:41.689 Cannot find device "nvmf_init_br2" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:41.689 Cannot find device "nvmf_tgt_br" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.689 Cannot find device "nvmf_tgt_br2" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:41.689 Cannot find device "nvmf_init_br" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:41.689 Cannot find device "nvmf_init_br2" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:41.689 Cannot find device "nvmf_tgt_br" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:41.689 Cannot find device "nvmf_tgt_br2" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:41.689 Cannot find device "nvmf_br" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:41.689 Cannot find device "nvmf_init_if" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:41.689 Cannot find device "nvmf_init_if2" 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.689 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:41.689 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:41.949 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:41.949 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:41.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:16:41.950 00:16:41.950 --- 10.0.0.3 ping statistics --- 00:16:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.950 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:41.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:41.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:41.950 00:16:41.950 --- 10.0.0.4 ping statistics --- 00:16:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.950 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:41.950 00:16:41.950 --- 10.0.0.1 ping statistics --- 00:16:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.950 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:41.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:41.950 00:16:41.950 --- 10.0.0.2 ping statistics --- 00:16:41.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.950 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=80075 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 80075 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 80075 ']' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:41.950 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.209 [2024-12-05 09:48:29.654589] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:42.209 [2024-12-05 09:48:29.654779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.469 [2024-12-05 09:48:29.839549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.469 [2024-12-05 09:48:29.957224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.469 [2024-12-05 09:48:29.957302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.469 [2024-12-05 09:48:29.957320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.469 [2024-12-05 09:48:29.957332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.469 [2024-12-05 09:48:29.957346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.469 [2024-12-05 09:48:29.959415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.469 [2024-12-05 09:48:29.959556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.469 [2024-12-05 09:48:29.959564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.038 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.038 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:43.038 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.038 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.038 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 [2024-12-05 09:48:30.686316] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:43.298 09:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 [2024-12-05 09:48:30.710597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 NULL1 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=80127 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:16:43.558 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:43.558 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.558 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.558 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.125 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.125 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:44.125 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.125 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.125 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.383 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.383 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:44.383 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.383 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.383 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.641 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.641 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:44.642 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.642 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.642 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.900 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.900 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:44.900 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.900 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.900 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.159 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.159 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:45.159 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.159 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.159 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.727 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.727 
09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:45.727 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.727 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.727 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.986 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.986 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:45.986 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.986 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.986 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.245 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.245 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:46.245 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.245 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.245 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.504 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.504 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:46.504 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.504 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.504 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:46.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.764 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.332 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.332 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:47.332 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.332 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.332 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.591 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.591 09:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:47.591 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.591 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.591 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.849 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.849 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:47.849 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.849 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.849 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.107 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.107 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:48.107 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.107 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.107 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.674 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.674 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:48.674 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.674 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.674 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.933 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.933 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:48.933 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.933 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.933 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.192 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.192 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:49.192 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.192 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.192 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.450 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.450 09:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:49.450 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.450 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.450 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.709 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.709 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:49.709 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.709 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.709 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.277 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.277 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:50.277 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.277 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.277 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.536 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.536 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:50.536 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.536 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.536 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.795 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.795 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:50.795 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.795 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.795 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.054 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.054 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:51.054 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.054 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.054 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.313 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.313 09:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:51.313 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.313 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.313 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.881 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.881 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:51.881 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.881 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.881 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.140 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.140 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:52.140 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.140 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.140 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.400 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.400 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:52.400 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.400 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.400 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.659 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.659 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:52.659 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.659 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.659 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.227 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.227 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:53.227 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.227 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.227 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.486 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.486 09:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:53.486 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.486 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.486 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.486 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 80127 00:16:53.745 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (80127) - No such process 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 80127 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.745 rmmod nvme_tcp 00:16:53.745 rmmod nvme_fabrics 00:16:53.745 rmmod nvme_keyring 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 80075 ']' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 80075 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 80075 ']' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 80075 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80075 00:16:53.745 killing process with pid 80075 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.745 
09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80075' 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 80075 00:16:53.745 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 80075 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.124 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:16:55.124 00:16:55.124 real 0m13.779s 00:16:55.124 user 0m43.605s 00:16:55.124 sys 0m3.578s 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.124 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.124 ************************************ 00:16:55.124 END TEST nvmf_connect_stress 00:16:55.124 ************************************ 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.125 ************************************ 00:16:55.125 START TEST nvmf_fused_ordering 00:16:55.125 ************************************ 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:55.125 * Looking for test storage... 00:16:55.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:55.125 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.384 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.384 --rc genhtml_branch_coverage=1 00:16:55.384 --rc genhtml_function_coverage=1 00:16:55.384 --rc genhtml_legend=1 00:16:55.384 --rc geninfo_all_blocks=1 00:16:55.384 --rc geninfo_unexecuted_blocks=1 00:16:55.384 00:16:55.384 ' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.384 --rc genhtml_branch_coverage=1 00:16:55.384 --rc genhtml_function_coverage=1 00:16:55.384 --rc genhtml_legend=1 00:16:55.384 --rc geninfo_all_blocks=1 00:16:55.384 --rc geninfo_unexecuted_blocks=1 00:16:55.384 00:16:55.384 ' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.384 --rc genhtml_branch_coverage=1 00:16:55.384 --rc genhtml_function_coverage=1 00:16:55.384 --rc genhtml_legend=1 00:16:55.384 --rc geninfo_all_blocks=1 00:16:55.384 --rc geninfo_unexecuted_blocks=1 00:16:55.384 00:16:55.384 ' 00:16:55.384 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:55.384 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:55.384 --rc genhtml_branch_coverage=1 00:16:55.384 --rc genhtml_function_coverage=1 00:16:55.384 --rc genhtml_legend=1 00:16:55.384 --rc geninfo_all_blocks=1 00:16:55.384 --rc geninfo_unexecuted_blocks=1 00:16:55.384 00:16:55.384 ' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:55.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:55.385 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:55.385 Cannot find device "nvmf_init_br" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:55.385 Cannot find device "nvmf_init_br2" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:55.385 Cannot find device "nvmf_tgt_br" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.385 Cannot find device "nvmf_tgt_br2" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:55.385 Cannot find device "nvmf_init_br" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:55.385 Cannot find device "nvmf_init_br2" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:55.385 Cannot find device "nvmf_tgt_br" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:55.385 Cannot find device "nvmf_tgt_br2" 00:16:55.385 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:16:55.386 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:55.386 Cannot find device "nvmf_br" 00:16:55.386 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:16:55.386 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:55.386 Cannot find device "nvmf_init_if" 00:16:55.386 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:16:55.386 
09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:55.645 Cannot find device "nvmf_init_if2" 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.645 09:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:55.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:55.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:16:55.645 00:16:55.645 --- 10.0.0.3 ping statistics --- 00:16:55.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.645 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:55.645 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:55.645 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:55.645 00:16:55.645 --- 10.0.0.4 ping statistics --- 00:16:55.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.645 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:55.645 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:55.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:55.645 00:16:55.645 --- 10.0.0.1 ping statistics --- 00:16:55.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.645 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:55.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:55.646 00:16:55.646 --- 10.0.0.2 ping statistics --- 00:16:55.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.646 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:55.646 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=80530 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 80530 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 80530 ']' 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
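The common.sh@227 line above is the piece that makes the target run inside the namespace: the netns-exec prefix is prepended to the app command array, so the nvmf_tgt launch that follows runs under ip netns exec. Schematically (a sketch with assumed variable contents; the real nvmfappstart/waitforlisten helpers also handle SHM IDs, retries, and error reporting):

    # Sketch of the namespaced launch pattern, per the trace.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # common.sh@227: prepend the netns prefix
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x2 &                 # -m 0x2: one reactor, pinned to core 1
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # poll until /var/tmp/spdk.sock answers before issuing rpc_cmd calls

Because the RPC channel is a UNIX domain socket, the harness in the root namespace can keep driving a target process that lives in nvmf_tgt_ns_spdk.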
00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.908 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:55.908 [2024-12-05 09:48:43.393489] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:55.908 [2024-12-05 09:48:43.393638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.211 [2024-12-05 09:48:43.564647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.211 [2024-12-05 09:48:43.687527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.211 [2024-12-05 09:48:43.687597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.211 [2024-12-05 09:48:43.687616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.211 [2024-12-05 09:48:43.687646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.211 [2024-12-05 09:48:43.687660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.211 [2024-12-05 09:48:43.688791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.815 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 [2024-12-05 09:48:44.444903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 [2024-12-05 09:48:44.461147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 NULL1 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.075 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:57.075 [2024-12-05 09:48:44.550824] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
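Taken together, the rpc_cmd calls in this stretch provision the target end to end. For reference, the same sequence can be issued standalone with scripts/rpc.py (rpc_cmd in the harness is a thin wrapper around it); flags are copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -u 8192 = 8 KiB in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exported as namespace ID 1

The fused_ordering binary then connects as an initiator using the -r connection string shown above; each fused_ordering(N) line it prints below marks one completed iteration of its fused-command ordering check, 1024 iterations in total for this run.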
00:16:57.075 [2024-12-05 09:48:44.550981] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80581 ] 00:16:57.663 Attached to nqn.2016-06.io.spdk:cnode1 00:16:57.663 Namespace ID: 1 size: 1GB 00:16:57.663 fused_ordering(0) 00:16:57.663 fused_ordering(1) 00:16:57.663 fused_ordering(2) 00:16:57.663 fused_ordering(3) 00:16:57.663 fused_ordering(4) 00:16:57.663 fused_ordering(5) 00:16:57.663 fused_ordering(6) 00:16:57.663 fused_ordering(7) 00:16:57.663 fused_ordering(8) 00:16:57.663 fused_ordering(9) 00:16:57.663 fused_ordering(10) 00:16:57.663 fused_ordering(11) 00:16:57.663 fused_ordering(12) 00:16:57.663 fused_ordering(13) 00:16:57.663 fused_ordering(14) 00:16:57.663 fused_ordering(15) 00:16:57.663 fused_ordering(16) 00:16:57.663 fused_ordering(17) 00:16:57.663 fused_ordering(18) 00:16:57.663 fused_ordering(19) 00:16:57.663 fused_ordering(20) 00:16:57.663 fused_ordering(21) 00:16:57.663 fused_ordering(22) 00:16:57.663 fused_ordering(23) 00:16:57.663 fused_ordering(24) 00:16:57.663 fused_ordering(25) 00:16:57.663 fused_ordering(26) 00:16:57.663 fused_ordering(27) 00:16:57.664 fused_ordering(28) 00:16:57.664 fused_ordering(29) 00:16:57.664 fused_ordering(30) 00:16:57.664 fused_ordering(31) 00:16:57.664 fused_ordering(32) 00:16:57.664 fused_ordering(33) 00:16:57.664 fused_ordering(34) 00:16:57.664 fused_ordering(35) 00:16:57.664 fused_ordering(36) 00:16:57.664 fused_ordering(37) 00:16:57.664 fused_ordering(38) 00:16:57.664 fused_ordering(39) 00:16:57.664 fused_ordering(40) 00:16:57.664 fused_ordering(41) 00:16:57.664 fused_ordering(42) 00:16:57.664 fused_ordering(43) 00:16:57.664 fused_ordering(44) 00:16:57.664 fused_ordering(45) 00:16:57.664 fused_ordering(46) 00:16:57.664 fused_ordering(47) 00:16:57.664 fused_ordering(48) 00:16:57.664 fused_ordering(49) 00:16:57.664 fused_ordering(50) 00:16:57.664 fused_ordering(51) 00:16:57.664 fused_ordering(52) 00:16:57.664 fused_ordering(53) 00:16:57.664 fused_ordering(54) 00:16:57.664 fused_ordering(55) 00:16:57.664 fused_ordering(56) 00:16:57.664 fused_ordering(57) 00:16:57.664 fused_ordering(58) 00:16:57.664 fused_ordering(59) 00:16:57.664 fused_ordering(60) 00:16:57.664 fused_ordering(61) 00:16:57.664 fused_ordering(62) 00:16:57.664 fused_ordering(63) 00:16:57.664 fused_ordering(64) 00:16:57.664 fused_ordering(65) 00:16:57.664 fused_ordering(66) 00:16:57.664 fused_ordering(67) 00:16:57.664 fused_ordering(68) 00:16:57.664 fused_ordering(69) 00:16:57.664 fused_ordering(70) 00:16:57.664 fused_ordering(71) 00:16:57.664 fused_ordering(72) 00:16:57.664 fused_ordering(73) 00:16:57.664 fused_ordering(74) 00:16:57.664 fused_ordering(75) 00:16:57.664 fused_ordering(76) 00:16:57.664 fused_ordering(77) 00:16:57.664 fused_ordering(78) 00:16:57.664 fused_ordering(79) 00:16:57.664 fused_ordering(80) 00:16:57.664 fused_ordering(81) 00:16:57.664 fused_ordering(82) 00:16:57.664 fused_ordering(83) 00:16:57.665 fused_ordering(84) 00:16:57.665 fused_ordering(85) 00:16:57.665 fused_ordering(86) 00:16:57.665 fused_ordering(87) 00:16:57.665 fused_ordering(88) 00:16:57.665 fused_ordering(89) 00:16:57.665 fused_ordering(90) 00:16:57.665 fused_ordering(91) 00:16:57.665 fused_ordering(92) 00:16:57.665 fused_ordering(93) 00:16:57.665 fused_ordering(94) 00:16:57.665 fused_ordering(95) 00:16:57.665 fused_ordering(96) 00:16:57.665 fused_ordering(97) 00:16:57.665 
fused_ordering(98) 00:16:57.665 [fused_ordering(99) through fused_ordering(956) condensed for readability: the counter advances strictly in order with no gaps or repeats, timestamps 00:16:57.665 through 00:16:59.324] fused_ordering(957) 00:16:59.324 
fused_ordering(958) 00:16:59.324 fused_ordering(959) 00:16:59.325 fused_ordering(960) 00:16:59.325 fused_ordering(961) 00:16:59.325 fused_ordering(962) 00:16:59.325 fused_ordering(963) 00:16:59.325 fused_ordering(964) 00:16:59.325 fused_ordering(965) 00:16:59.325 fused_ordering(966) 00:16:59.325 fused_ordering(967) 00:16:59.325 fused_ordering(968) 00:16:59.325 fused_ordering(969) 00:16:59.325 fused_ordering(970) 00:16:59.325 fused_ordering(971) 00:16:59.325 fused_ordering(972) 00:16:59.325 fused_ordering(973) 00:16:59.325 fused_ordering(974) 00:16:59.325 fused_ordering(975) 00:16:59.325 fused_ordering(976) 00:16:59.325 fused_ordering(977) 00:16:59.325 fused_ordering(978) 00:16:59.325 fused_ordering(979) 00:16:59.325 fused_ordering(980) 00:16:59.325 fused_ordering(981) 00:16:59.325 fused_ordering(982) 00:16:59.325 fused_ordering(983) 00:16:59.325 fused_ordering(984) 00:16:59.325 fused_ordering(985) 00:16:59.325 fused_ordering(986) 00:16:59.325 fused_ordering(987) 00:16:59.325 fused_ordering(988) 00:16:59.325 fused_ordering(989) 00:16:59.325 fused_ordering(990) 00:16:59.325 fused_ordering(991) 00:16:59.325 fused_ordering(992) 00:16:59.325 fused_ordering(993) 00:16:59.325 fused_ordering(994) 00:16:59.325 fused_ordering(995) 00:16:59.325 fused_ordering(996) 00:16:59.325 fused_ordering(997) 00:16:59.325 fused_ordering(998) 00:16:59.325 fused_ordering(999) 00:16:59.325 fused_ordering(1000) 00:16:59.325 fused_ordering(1001) 00:16:59.325 fused_ordering(1002) 00:16:59.325 fused_ordering(1003) 00:16:59.325 fused_ordering(1004) 00:16:59.325 fused_ordering(1005) 00:16:59.325 fused_ordering(1006) 00:16:59.325 fused_ordering(1007) 00:16:59.325 fused_ordering(1008) 00:16:59.325 fused_ordering(1009) 00:16:59.325 fused_ordering(1010) 00:16:59.325 fused_ordering(1011) 00:16:59.325 fused_ordering(1012) 00:16:59.325 fused_ordering(1013) 00:16:59.325 fused_ordering(1014) 00:16:59.325 fused_ordering(1015) 00:16:59.325 fused_ordering(1016) 00:16:59.325 fused_ordering(1017) 00:16:59.325 fused_ordering(1018) 00:16:59.325 fused_ordering(1019) 00:16:59.325 fused_ordering(1020) 00:16:59.325 fused_ordering(1021) 00:16:59.325 fused_ordering(1022) 00:16:59.325 fused_ordering(1023) 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.325 rmmod nvme_tcp 00:16:59.325 rmmod nvme_fabrics 00:16:59.325 rmmod nvme_keyring 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:59.325 09:48:46 
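With all 1024 iterations completed in order, the trap is cleared and nvmftestfini tears everything down in roughly the reverse of setup: unload the kernel initiator modules (the rmmod lines above), kill the target by PID, strip the tagged firewall rules, then dismantle the links and namespace. Schematically (names from the trace; the final netns deletion is an assumption about what remove_spdk_ns boils down to):

    kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess 80530: kill -0 check, kill, wait
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK_NVMF-tagged rules
    ip link set nvmf_init_br nomaster                       # detach each of the four ports from the bridge
    ip link set nvmf_init_br down                           # bring the root-namespace links down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of remove_spdk_ns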
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 80530 ']' 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 80530 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 80530 ']' 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 80530 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80530 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.325 killing process with pid 80530 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80530' 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 80530 00:16:59.325 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 80530 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:00.263 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:00.522 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:17:00.522 00:17:00.522 real 0m5.427s 00:17:00.522 user 0m6.097s 00:17:00.522 sys 0m1.635s 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.522 ************************************ 00:17:00.522 END TEST nvmf_fused_ordering 00:17:00.522 ************************************ 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.522 ************************************ 00:17:00.522 START TEST nvmf_ns_masking 00:17:00.522 ************************************ 00:17:00.522 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:00.782 * Looking for test storage... 
00:17:00.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.782 --rc genhtml_branch_coverage=1 00:17:00.782 --rc genhtml_function_coverage=1 00:17:00.782 --rc genhtml_legend=1 00:17:00.782 --rc geninfo_all_blocks=1 00:17:00.782 --rc geninfo_unexecuted_blocks=1 00:17:00.782 00:17:00.782 ' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.782 --rc genhtml_branch_coverage=1 00:17:00.782 --rc genhtml_function_coverage=1 00:17:00.782 --rc genhtml_legend=1 00:17:00.782 --rc geninfo_all_blocks=1 00:17:00.782 --rc geninfo_unexecuted_blocks=1 00:17:00.782 00:17:00.782 ' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.782 --rc genhtml_branch_coverage=1 00:17:00.782 --rc genhtml_function_coverage=1 00:17:00.782 --rc genhtml_legend=1 00:17:00.782 --rc geninfo_all_blocks=1 00:17:00.782 --rc geninfo_unexecuted_blocks=1 00:17:00.782 00:17:00.782 ' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:00.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.782 --rc genhtml_branch_coverage=1 00:17:00.782 --rc genhtml_function_coverage=1 00:17:00.782 --rc genhtml_legend=1 00:17:00.782 --rc geninfo_all_blocks=1 00:17:00.782 --rc geninfo_unexecuted_blocks=1 00:17:00.782 00:17:00.782 ' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
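The xtrace above is scripts/common.sh comparing the installed lcov version against 2 field by field. A simplified reconstruction of that lt/cmp_versions logic, for numeric version fields only (the real helper also handles >, <=, >= and mixed-length versions more carefully):

    # Simplified, illustrative reconstruction of the comparison traced above.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                      # split fields on '.', '-', ':' as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a < b)) && { [[ $op == '<' ]]; return; }
            ((a > b)) && { [[ $op == '>' ]]; return; }
        done
        [[ $op == '==' ]]                  # all fields equal
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # true: first field 1 < 2

Here the check succeeds (lcov 1.15 < 2), which selects the older --rc lcov_* option spelling exported in the LCOV_OPTS blocks above.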
# uname -s 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.782 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.783 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=73c7b5b7-9850-412c-acf3-56d6559d5f9b 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d170579d-b594-4538-9510-3aa833a4f610 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0e1631f9-e797-4a3e-b017-41fc92f55bfc 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:00.783 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.783 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.043 Cannot find device "nvmf_init_br" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.043 Cannot find device "nvmf_init_br2" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.043 Cannot find device "nvmf_tgt_br" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.043 Cannot find device "nvmf_tgt_br2" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.043 Cannot find device "nvmf_init_br" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.043 Cannot find device "nvmf_init_br2" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.043 Cannot find device "nvmf_tgt_br" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.043 Cannot find device 
"nvmf_tgt_br2" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.043 Cannot find device "nvmf_br" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.043 Cannot find device "nvmf_init_if" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.043 Cannot find device "nvmf_init_if2" 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:01.043 
09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:01.043 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:01.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:17:01.303 00:17:01.303 --- 10.0.0.3 ping statistics --- 00:17:01.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.303 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:01.303 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:01.303 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:17:01.303 00:17:01.303 --- 10.0.0.4 ping statistics --- 00:17:01.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.303 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:01.303 00:17:01.303 --- 10.0.0.1 ping statistics --- 00:17:01.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.303 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:01.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:01.303 00:17:01.303 --- 10.0.0.2 ping statistics --- 00:17:01.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.303 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=80865 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 80865 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 80865 ']' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.303 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.303 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.562 [2024-12-05 09:48:48.975654] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:01.562 [2024-12-05 09:48:48.975847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.563 [2024-12-05 09:48:49.173951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.822 [2024-12-05 09:48:49.323233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.822 [2024-12-05 09:48:49.323309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.822 [2024-12-05 09:48:49.323329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.822 [2024-12-05 09:48:49.323356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.822 [2024-12-05 09:48:49.323371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.822 [2024-12-05 09:48:49.324646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.389 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.647 [2024-12-05 09:48:50.262202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.906 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:02.906 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:02.906 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:03.163 Malloc1 00:17:03.163 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.422 Malloc2 00:17:03.422 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.680 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:03.938 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:04.196 [2024-12-05 09:48:51.643024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e1631f9-e797-4a3e-b017-41fc92f55bfc -a 10.0.0.3 -s 4420 -i 4 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:04.196 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:06.726 [ 0]:0x1 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
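
(The trace above reduces to a short RPC/nvme-cli sequence. A minimal sketch of the target setup and first connect, reusing the NQNs, serial, and addresses from this run; rpc.py abbreviates the full scripts/rpc.py path shown in the trace:)

# Target side: TCP transport, one malloc bdev, a subsystem with an open (-a) host list
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: connect with an explicit host NQN/ID, then poll for the serial
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  -I 0e1631f9-e797-4a3e-b017-41fc92f55bfc -a 10.0.0.3 -s 4420 -i 4
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # prints 1 once the namespace is up
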
00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3abbab2e07f44f4f9089c7da0f2832b0 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3abbab2e07f44f4f9089c7da0f2832b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.726 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:06.726 [ 0]:0x1 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3abbab2e07f44f4f9089c7da0f2832b0 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3abbab2e07f44f4f9089c7da0f2832b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:06.726 [ 1]:0x2 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:06.726 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.984 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.241 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:07.500 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:07.500 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e1631f9-e797-4a3e-b017-41fc92f55bfc -a 10.0.0.3 -s 4420 -i 4 00:17:07.758 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:07.758 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:07.758 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.758 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:07.758 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:07.758 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.656 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.656 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:09.656 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.656 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:09.656 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.657 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:09.916 [ 0]:0x2 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.916 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:10.175 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:10.175 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.175 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:10.175 [ 0]:0x1 00:17:10.175 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3abbab2e07f44f4f9089c7da0f2832b0 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3abbab2e07f44f4f9089c7da0f2832b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.176 [ 1]:0x2 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c12b43b0b08a42568d05801d52b81f90 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.176 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:10.435 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:10.695 [ 0]:0x2 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.695 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e1631f9-e797-4a3e-b017-41fc92f55bfc -a 10.0.0.3 -s 4420 -i 4 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:10.954 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:13.485 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:13.485 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.486 [ 0]:0x1 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3abbab2e07f44f4f9089c7da0f2832b0 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3abbab2e07f44f4f9089c7da0f2832b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:13.486 [ 1]:0x2 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.486 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.486 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
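
(The 0x1/0x2 greps above and below are the actual assertion of this test: a namespace added with --no-auto-visible stays hidden, with nvme id-ns reporting an all-zero NGUID, until its host NQN is allowed, and it hides again when the host is removed. A condensed sketch of the toggle being exercised, commands as they appear in the trace:)

# Namespace 1 was re-added masked; flip its visibility per host NQN
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # unmask
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # mask again
# Visibility probe, as ns_is_visible does it: list active namespaces, read the NGUID
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros while masked

(The NOT-wrapped nvmf_ns_remove_host call for namespace 2 just below is expected to fail with Code=-32602, presumably because namespace 2 was added without --no-auto-visible and so has no per-host visibility list to edit.)
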
00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:13.745 [ 0]:0x2 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:13.745 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:14.004 [2024-12-05 09:49:01.466117] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:14.004 2024/12/05 09:49:01 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:17:14.004 request: 00:17:14.004 { 00:17:14.004 "method": "nvmf_ns_remove_host", 00:17:14.004 "params": { 00:17:14.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.004 "nsid": 2, 00:17:14.004 "host": "nqn.2016-06.io.spdk:host1" 00:17:14.004 } 00:17:14.004 } 00:17:14.004 Got JSON-RPC error response 00:17:14.004 GoRPCClient: error on JSON-RPC call 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:14.004 [ 0]:0x2 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c12b43b0b08a42568d05801d52b81f90 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c12b43b0b08a42568d05801d52b81f90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:14.004 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=81253 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 81253 /var/tmp/host.sock 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 81253 ']' 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.264 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:14.264 [2024-12-05 09:49:01.808084] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
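
(From this point the initiator is a second SPDK app, spdk_tgt listening on /var/tmp/host.sock, rather than the kernel host stack, and the namespaces are re-created with explicit NGUIDs derived from the uuidgen values. A rough sketch of those steps under that reading; the inline pipeline mirrors the result of uuid2nguid's "tr -d -" filter, with the real helper living in nvmf/common.sh, and the flags copied verbatim from the trace:)

# NGUID = the UUID uppercased with its dashes stripped
ns1uuid=73c7b5b7-9850-412c-acf3-56d6559d5f9b
nguid=$(tr -d - <<< "${ns1uuid^^}")    # 73C7B5B79850412CACF356D6559D5F9B
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Host-side app: attach one controller per host NQN, then check the UUID round-trips
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 \
  -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect ns1uuid
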
00:17:14.264 [2024-12-05 09:49:01.808277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81253 ] 00:17:14.523 [2024-12-05 09:49:02.001115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.523 [2024-12-05 09:49:02.148495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.460 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.460 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:15.460 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.719 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:15.979 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 73c7b5b7-9850-412c-acf3-56d6559d5f9b 00:17:15.979 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:15.979 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 73C7B5B79850412CACF356D6559D5F9B -i 00:17:16.239 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d170579d-b594-4538-9510-3aa833a4f610 00:17:16.239 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:16.239 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D170579DB594453895103AA833A4F610 -i 00:17:16.497 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:16.756 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:16.756 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:16.756 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:17.015 nvme0n1 00:17:17.015 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:17.015 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:17.583 nvme1n2 00:17:17.583 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:17.583 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:17.583 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:17.583 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:17.583 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:17.841 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:17.841 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:17.841 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:17.841 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:18.099 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 73c7b5b7-9850-412c-acf3-56d6559d5f9b == \7\3\c\7\b\5\b\7\-\9\8\5\0\-\4\1\2\c\-\a\c\f\3\-\5\6\d\6\5\5\9\d\5\f\9\b ]] 00:17:18.099 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:18.099 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:18.099 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:18.357 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d170579d-b594-4538-9510-3aa833a4f610 == \d\1\7\0\5\7\9\d\-\b\5\9\4\-\4\5\3\8\-\9\5\1\0\-\3\a\a\8\3\3\a\4\f\6\1\0 ]] 00:17:18.357 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.615 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 73c7b5b7-9850-412c-acf3-56d6559d5f9b 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73C7B5B79850412CACF356D6559D5F9B 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73C7B5B79850412CACF356D6559D5F9B 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:18.873 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 73C7B5B79850412CACF356D6559D5F9B 00:17:19.132 [2024-12-05 09:49:06.508645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:19.132 [2024-12-05 09:49:06.509181] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:19.132 [2024-12-05 09:49:06.509314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.132 2024/12/05 09:49:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:73C7B5B79850412CACF356D6559D5F9B no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:17:19.132 request: 00:17:19.132 { 00:17:19.132 "method": "nvmf_subsystem_add_ns", 00:17:19.132 "params": { 00:17:19.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.132 "namespace": { 00:17:19.132 "bdev_name": "invalid", 00:17:19.132 "nsid": 1, 00:17:19.132 "nguid": "73C7B5B79850412CACF356D6559D5F9B", 00:17:19.132 "no_auto_visible": false, 00:17:19.132 "hide_metadata": false 00:17:19.132 } 00:17:19.132 } 00:17:19.132 } 00:17:19.132 Got JSON-RPC error response 00:17:19.132 GoRPCClient: error on JSON-RPC call 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 73c7b5b7-9850-412c-acf3-56d6559d5f9b 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 73C7B5B79850412CACF356D6559D5F9B -i 
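The JSON-RPC failure above is deliberate: the test asks the target to add a namespace backed by a bdev named "invalid", which cannot be opened (error -19), then repeats the call with the real Malloc1 bdev and the same NGUID. Condensed to the calls visible in the trace (paths shortened; NOT is the autotest helper that asserts a non-zero exit code):

    nguid=73C7B5B79850412CACF356D6559D5F9B   # uuid2nguid 73c7b5b7-9850-412c-acf3-56d6559d5f9b
    # Must fail: no bdev named "invalid" exists, so the target returns -32602.
    NOT scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g "$nguid"
    # Succeeds: same NSID and NGUID, backed by the actual Malloc1 bdev.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i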
00:17:19.132 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:21.665 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:21.665 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:21.665 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 81253 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 81253 ']' 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 81253 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81253 00:17:21.665 killing process with pid 81253 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81253' 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 81253 00:17:21.665 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 81253 00:17:23.568 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.568 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.568 rmmod nvme_tcp 00:17:23.827 rmmod nvme_fabrics 00:17:23.827 rmmod nvme_keyring 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 80865 ']' 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 80865 00:17:23.827 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 80865 ']' 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 80865 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80865 00:17:23.828 killing process with pid 80865 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80865' 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 80865 00:17:23.828 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 80865 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
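The teardown running here is the standard nvmftestfini sequence; reduced to its effect (a sketch, with the pid and interface names from this run):

    modprobe -v -r nvme-tcp nvme-fabrics                  # unload kernel initiator modules
    kill 80865                                            # stop the nvmf_tgt process
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's firewall rules
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge                    # dismantle the veth/bridge topology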
00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.205 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:17:25.464 00:17:25.464 real 0m24.691s 00:17:25.464 user 0m40.459s 00:17:25.464 sys 0m3.575s 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:25.464 ************************************ 00:17:25.464 END TEST nvmf_ns_masking 00:17:25.464 ************************************ 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.464 ************************************ 00:17:25.464 START TEST nvmf_vfio_user 00:17:25.464 ************************************ 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:25.464 * Looking for test storage... 
00:17:25.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.464 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:25.464 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.723 --rc genhtml_branch_coverage=1 00:17:25.723 --rc genhtml_function_coverage=1 00:17:25.723 --rc genhtml_legend=1 00:17:25.723 --rc geninfo_all_blocks=1 00:17:25.723 --rc geninfo_unexecuted_blocks=1 00:17:25.723 00:17:25.723 ' 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.723 --rc genhtml_branch_coverage=1 00:17:25.723 --rc genhtml_function_coverage=1 00:17:25.723 --rc genhtml_legend=1 00:17:25.723 --rc geninfo_all_blocks=1 00:17:25.723 --rc geninfo_unexecuted_blocks=1 00:17:25.723 00:17:25.723 ' 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.723 --rc genhtml_branch_coverage=1 00:17:25.723 --rc genhtml_function_coverage=1 00:17:25.723 --rc genhtml_legend=1 00:17:25.723 --rc geninfo_all_blocks=1 00:17:25.723 --rc geninfo_unexecuted_blocks=1 00:17:25.723 00:17:25.723 ' 00:17:25.723 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.723 --rc genhtml_branch_coverage=1 00:17:25.723 --rc genhtml_function_coverage=1 00:17:25.723 --rc genhtml_legend=1 00:17:25.723 --rc geninfo_all_blocks=1 00:17:25.724 --rc geninfo_unexecuted_blocks=1 00:17:25.724 00:17:25.724 ' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 
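The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which selects the branch/function-coverage LCOV_OPTS exported just above. The core idea, heavily simplified (the real function also handles other operators and unequal field counts):

    # Split dotted versions into fields and compare the first differing pair.
    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ${ver1[0]:-0} < ${ver2[0]:-0} )) && echo "1.15 < 2"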
00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:25.724 09:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=81646 00:17:25.724 Process pid: 81646 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 81646' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 81646 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 81646 ']' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.724 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:25.724 [2024-12-05 09:49:13.229056] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:25.724 [2024-12-05 09:49:13.229211] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.983 [2024-12-05 09:49:13.410389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.983 [2024-12-05 09:49:13.586874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.983 [2024-12-05 09:49:13.586984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
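Per the app_setup_trace notices around this point, the 0xFFFF tracepoint mask enabled for this nvmf_tgt can be inspected during or after the run (commands as printed by the application itself):

    spdk_trace -s nvmf -i 0           # snapshot the live trace buffer for shm-id 0
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the buffer for offline analysis/debug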
00:17:25.983 [2024-12-05 09:49:13.587014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.983 [2024-12-05 09:49:13.587032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.983 [2024-12-05 09:49:13.587050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.983 [2024-12-05 09:49:13.590072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.983 [2024-12-05 09:49:13.590211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.983 [2024-12-05 09:49:13.590357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.983 [2024-12-05 09:49:13.590370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.967 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.967 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:26.967 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:27.916 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:28.175 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:28.175 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:28.175 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.175 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:28.175 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:28.434 Malloc1 00:17:28.434 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:28.692 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:28.951 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:29.210 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:29.210 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:29.210 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:29.469 Malloc2 00:17:29.469 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:29.726 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:29.984 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:30.243 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:30.243 [2024-12-05 09:49:17.678913] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:30.243 [2024-12-05 09:49:17.679036] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81778 ] 00:17:30.243 [2024-12-05 09:49:17.852262] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:30.243 [2024-12-05 09:49:17.858756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:30.243 [2024-12-05 09:49:17.858797] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7feb2f7be000 00:17:30.243 [2024-12-05 09:49:17.859706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.860692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.862970] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.863722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.864700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.865705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.866734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:30.243 [2024-12-05 09:49:17.867740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:30.243 
[2024-12-05 09:49:17.868743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:30.243 [2024-12-05 09:49:17.868788] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7feb2f7b3000 00:17:30.243 [2024-12-05 09:49:17.870353] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:30.504 [2024-12-05 09:49:17.886825] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:30.504 [2024-12-05 09:49:17.886891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:30.504 [2024-12-05 09:49:17.891923] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:30.504 [2024-12-05 09:49:17.892087] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:30.504 [2024-12-05 09:49:17.892618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:30.504 [2024-12-05 09:49:17.892670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:30.504 [2024-12-05 09:49:17.892683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:30.504 [2024-12-05 09:49:17.892911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:30.504 [2024-12-05 09:49:17.892959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:30.504 [2024-12-05 09:49:17.892991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:30.504 [2024-12-05 09:49:17.893924] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:30.504 [2024-12-05 09:49:17.894004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:30.504 [2024-12-05 09:49:17.894024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.894923] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:30.504 [2024-12-05 09:49:17.894980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.895923] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:30.504 [2024-12-05 09:49:17.895965] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:30.504 
[2024-12-05 09:49:17.895985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.896004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.896117] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:30.504 [2024-12-05 09:49:17.896126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.896137] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:30.504 [2024-12-05 09:49:17.896978] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:30.504 [2024-12-05 09:49:17.897962] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:30.504 [2024-12-05 09:49:17.898985] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:30.504 [2024-12-05 09:49:17.899968] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:30.504 [2024-12-05 09:49:17.900127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:30.504 [2024-12-05 09:49:17.901019] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:30.504 [2024-12-05 09:49:17.901065] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:30.504 [2024-12-05 09:49:17.901077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:30.504 [2024-12-05 09:49:17.901104] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:30.504 [2024-12-05 09:49:17.901123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:30.504 [2024-12-05 09:49:17.901152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.504 [2024-12-05 09:49:17.901162] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.504 [2024-12-05 09:49:17.901178] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.504 [2024-12-05 09:49:17.901207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.504 [2024-12-05 09:49:17.901374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:30.504 [2024-12-05 
09:49:17.901400] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:30.504 [2024-12-05 09:49:17.901413] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:30.504 [2024-12-05 09:49:17.901421] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:30.504 [2024-12-05 09:49:17.901431] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:30.504 [2024-12-05 09:49:17.901440] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:30.504 [2024-12-05 09:49:17.901451] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:30.504 [2024-12-05 09:49:17.901459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:30.504 [2024-12-05 09:49:17.901488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.901546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.901568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.505 [2024-12-05 09:49:17.901587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.505 [2024-12-05 09:49:17.901602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.505 [2024-12-05 09:49:17.901618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.505 [2024-12-05 09:49:17.901626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.901675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.901686] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:30.505 [2024-12-05 09:49:17.901697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 
00:17:30.505 [2024-12-05 09:49:17.901709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.901760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.901882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.901930] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:30.505 [2024-12-05 09:49:17.901942] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:30.505 [2024-12-05 09:49:17.901969] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.901985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902064] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:30.505 [2024-12-05 09:49:17.902085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902135] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.505 [2024-12-05 09:49:17.902155] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.505 [2024-12-05 09:49:17.902161] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.902175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for identify namespace id descriptors (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902348] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:30.505 [2024-12-05 09:49:17.902357] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.505 [2024-12-05 09:49:17.902365] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.902379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902519] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:30.505 [2024-12-05 09:49:17.902530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:30.505 [2024-12-05 09:49:17.902539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:30.505 [2024-12-05 09:49:17.902585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
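A note for readers of this trace: each spdk_nvme_print_completion record above is a direct rendering of the 16-byte NVMe completion queue entry, and the "(00/00)" pair is status code type / status code. The following minimal sketch is an illustration only (not part of the test tree); the sample dword values are hypothetical, chosen to reproduce the cid:191 sqhd:000e record above. The cdw0 decode also explains the 0x7e007e carried by the NUMBER OF QUEUES completions in this sequence: 0x7e is 126, zero-based, i.e. 127 I/O queues of each kind, matching the identify output further down.

    # Sketch: decode NVMe CQE dwords into the fields spdk_nvme_print_completion shows.
    # Sample inputs are hypothetical values matching records in this trace.
    def decode_cqe(dw2: int, dw3: int) -> dict:
        return {
            "sqhd": dw2 & 0xFFFF,          # SQ head pointer
            "qid":  (dw2 >> 16) & 0xFFFF,  # submission queue id
            "cid":  dw3 & 0xFFFF,          # command identifier
            "p":    (dw3 >> 16) & 1,       # phase tag
            "sc":   (dw3 >> 17) & 0xFF,    # status code (second number in "00/00")
            "sct":  (dw3 >> 25) & 0x7,     # status code type (first number)
            "m":    (dw3 >> 30) & 1,       # more bit
            "dnr":  (dw3 >> 31) & 1,       # do-not-retry bit
        }

    print(decode_cqe(dw2=0x000E, dw3=(1 << 16) | 191))  # SUCCESS (00/00) cid:191 sqhd:000e p:1

    # cdw0 of the NUMBER OF QUEUES completions is 0x7e007e: NSQA and NCQA, zero-based
    nsqa, ncqa = 0x7E007E & 0xFFFF, 0x7E007E >> 16
    print(nsqa + 1, ncqa + 1)  # 127 127 -> "Number of I/O Submission/Completion Queues: 127"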
00:17:30.505 [2024-12-05 09:49:17.902688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:30.505 [2024-12-05 09:49:17.902730] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:30.505 [2024-12-05 09:49:17.902742] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:30.505 [2024-12-05 09:49:17.902751] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:30.505 [2024-12-05 09:49:17.902757] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:30.505 [2024-12-05 09:49:17.902767] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:30.505 [2024-12-05 09:49:17.902778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:30.505 [2024-12-05 09:49:17.902794] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:30.505 [2024-12-05 09:49:17.902806] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:30.505 [2024-12-05 09:49:17.902817] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.902827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:30.505 [2024-12-05 09:49:17.902855] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:30.505 [2024-12-05 09:49:17.902866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.902879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902901] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:30.505 [2024-12-05 09:49:17.902909] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:30.505 [2024-12-05 09:49:17.902917] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:30.505 [2024-12-05 09:49:17.902930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:30.505 [2024-12-05 09:49:17.902969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:30.505 ===================================================== 00:17:30.505 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:30.505 ===================================================== 00:17:30.505 Controller Capabilities/Features 00:17:30.505 
================================ 00:17:30.505 Vendor ID: 4e58 00:17:30.505 Subsystem Vendor ID: 4e58 00:17:30.505 Serial Number: SPDK1 00:17:30.505 Model Number: SPDK bdev Controller 00:17:30.505 Firmware Version: 25.01 00:17:30.505 Recommended Arb Burst: 6 00:17:30.505 IEEE OUI Identifier: 8d 6b 50 00:17:30.505 Multi-path I/O 00:17:30.505 May have multiple subsystem ports: Yes 00:17:30.505 May have multiple controllers: Yes 00:17:30.505 Associated with SR-IOV VF: No 00:17:30.505 Max Data Transfer Size: 131072 00:17:30.505 Max Number of Namespaces: 32 00:17:30.505 Max Number of I/O Queues: 127 00:17:30.506 NVMe Specification Version (VS): 1.3 00:17:30.506 NVMe Specification Version (Identify): 1.3 00:17:30.506 Maximum Queue Entries: 256 00:17:30.506 Contiguous Queues Required: Yes 00:17:30.506 Arbitration Mechanisms Supported 00:17:30.506 Weighted Round Robin: Not Supported 00:17:30.506 Vendor Specific: Not Supported 00:17:30.506 Reset Timeout: 15000 ms 00:17:30.506 Doorbell Stride: 4 bytes 00:17:30.506 NVM Subsystem Reset: Not Supported 00:17:30.506 Command Sets Supported 00:17:30.506 NVM Command Set: Supported 00:17:30.506 Boot Partition: Not Supported 00:17:30.506 Memory Page Size Minimum: 4096 bytes 00:17:30.506 Memory Page Size Maximum: 4096 bytes 00:17:30.506 Persistent Memory Region: Not Supported 00:17:30.506 Optional Asynchronous Events Supported 00:17:30.506 Namespace Attribute Notices: Supported 00:17:30.506 Firmware Activation Notices: Not Supported 00:17:30.506 ANA Change Notices: Not Supported 00:17:30.506 PLE Aggregate Log Change Notices: Not Supported 00:17:30.506 LBA Status Info Alert Notices: Not Supported 00:17:30.506 EGE Aggregate Log Change Notices: Not Supported 00:17:30.506 Normal NVM Subsystem Shutdown event: Not Supported 00:17:30.506 Zone Descriptor Change Notices: Not Supported 00:17:30.506 Discovery Log Change Notices: Not Supported 00:17:30.506 Controller Attributes 00:17:30.506 128-bit Host Identifier: Supported 00:17:30.506 Non-Operational Permissive Mode: Not Supported 00:17:30.506 NVM Sets: Not Supported 00:17:30.506 Read Recovery Levels: Not Supported 00:17:30.506 Endurance Groups: Not Supported 00:17:30.506 Predictable Latency Mode: Not Supported 00:17:30.506 Traffic Based Keep ALive: Not Supported 00:17:30.506 Namespace Granularity: Not Supported 00:17:30.506 SQ Associations: Not Supported 00:17:30.506 UUID List: Not Supported 00:17:30.506 Multi-Domain Subsystem: Not Supported 00:17:30.506 Fixed Capacity Management: Not Supported 00:17:30.506 Variable Capacity Management: Not Supported 00:17:30.506 Delete Endurance Group: Not Supported 00:17:30.506 Delete NVM Set: Not Supported 00:17:30.506 Extended LBA Formats Supported: Not Supported 00:17:30.506 Flexible Data Placement Supported: Not Supported 00:17:30.506 00:17:30.506 Controller Memory Buffer Support 00:17:30.506 ================================ 00:17:30.506 Supported: No 00:17:30.506 00:17:30.506 Persistent Memory Region Support 00:17:30.506 ================================ 00:17:30.506 Supported: No 00:17:30.506 00:17:30.506 Admin Command Set Attributes 00:17:30.506 ============================ 00:17:30.506 Security Send/Receive: Not Supported 00:17:30.506 Format NVM: Not Supported 00:17:30.506 Firmware Activate/Download: Not Supported 00:17:30.506 Namespace Management: Not Supported 00:17:30.506 Device Self-Test: Not Supported 00:17:30.506 Directives: Not Supported 00:17:30.506 NVMe-MI: Not Supported 00:17:30.506 Virtualization Management: Not Supported 00:17:30.506 Doorbell Buffer Config: Not 
Supported 00:17:30.506 Get LBA Status Capability: Not Supported 00:17:30.506 Command & Feature Lockdown Capability: Not Supported 00:17:30.506 Abort Command Limit: 4 00:17:30.506 Async Event Request Limit: 4 00:17:30.506 Number of Firmware Slots: N/A 00:17:30.506 Firmware Slot 1 Read-Only: N/A 00:17:30.506 [2024-12-05 09:49:17.902998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:30.506 [2024-12-05 09:49:17.903016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:30.506 [2024-12-05 09:49:17.903028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:30.506 Firmware Activation Without Reset: N/A 00:17:30.506 Multiple Update Detection Support: N/A 00:17:30.506 Firmware Update Granularity: No Information Provided 00:17:30.506 Per-Namespace SMART Log: No 00:17:30.506 Asymmetric Namespace Access Log Page: Not Supported 00:17:30.506 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:30.506 Command Effects Log Page: Supported 00:17:30.506 Get Log Page Extended Data: Supported 00:17:30.506 Telemetry Log Pages: Not Supported 00:17:30.506 Persistent Event Log Pages: Not Supported 00:17:30.506 Supported Log Pages Log Page: May Support 00:17:30.506 Commands Supported & Effects Log Page: Not Supported 00:17:30.506 Feature Identifiers & Effects Log Page: May Support 00:17:30.506 NVMe-MI Commands & Effects Log Page: May Support 00:17:30.506 Data Area 4 for Telemetry Log: Not Supported 00:17:30.506 Error Log Page Entries Supported: 128 00:17:30.506 Keep Alive: Supported 00:17:30.506 Keep Alive Granularity: 10000 ms 00:17:30.506 00:17:30.506 NVM Command Set Attributes 00:17:30.506 ========================== 00:17:30.506 Submission Queue Entry Size 00:17:30.506 Max: 64 00:17:30.506 Min: 64 00:17:30.506 Completion Queue Entry Size 00:17:30.506 Max: 16 00:17:30.506 Min: 16 00:17:30.506 Number of Namespaces: 32 00:17:30.506 Compare Command: Supported 00:17:30.506 Write Uncorrectable Command: Not Supported 00:17:30.506 Dataset Management Command: Supported 00:17:30.506 Write Zeroes Command: Supported 00:17:30.506 Set Features Save Field: Not Supported 00:17:30.506 Reservations: Not Supported 00:17:30.506 Timestamp: Not Supported 00:17:30.506 Copy: Supported 00:17:30.506 Volatile Write Cache: Present 00:17:30.506 Atomic Write Unit (Normal): 1 00:17:30.506 Atomic Write Unit (PFail): 1 00:17:30.506 Atomic Compare & Write Unit: 1 00:17:30.506 Fused Compare & Write: Supported 00:17:30.506 Scatter-Gather List 00:17:30.506 SGL Command Set: Supported (Dword aligned) 00:17:30.506 SGL Keyed: Not Supported 00:17:30.506 SGL Bit Bucket Descriptor: Not Supported 00:17:30.506 SGL Metadata Pointer: Not Supported 00:17:30.506 Oversized SGL: Not Supported 00:17:30.506 SGL Metadata Address: Not Supported 00:17:30.506 SGL Offset: Not Supported 00:17:30.506 Transport SGL Data Block: Not Supported 00:17:30.506 Replay Protected Memory Block: Not Supported 00:17:30.506 00:17:30.506 Firmware Slot Information 00:17:30.506 ========================= 00:17:30.506 Active slot: 1 00:17:30.506 Slot 1 Firmware Revision: 25.01 00:17:30.506 00:17:30.506 00:17:30.506 Commands Supported and Effects 00:17:30.506 ============================== 00:17:30.506 Admin Commands 00:17:30.506 -------------- 00:17:30.506 Get Log Page (02h): Supported 00:17:30.506 Identify (06h): Supported 00:17:30.506 Abort (08h): Supported 00:17:30.506 Set
Features (09h): Supported 00:17:30.506 Get Features (0Ah): Supported 00:17:30.506 Asynchronous Event Request (0Ch): Supported 00:17:30.506 Keep Alive (18h): Supported 00:17:30.506 I/O Commands 00:17:30.506 ------------ 00:17:30.506 Flush (00h): Supported LBA-Change 00:17:30.506 Write (01h): Supported LBA-Change 00:17:30.506 Read (02h): Supported 00:17:30.506 Compare (05h): Supported 00:17:30.506 Write Zeroes (08h): Supported LBA-Change 00:17:30.506 Dataset Management (09h): Supported LBA-Change 00:17:30.506 Copy (19h): Supported LBA-Change 00:17:30.506 00:17:30.506 Error Log 00:17:30.506 ========= 00:17:30.506 00:17:30.506 Arbitration 00:17:30.506 =========== 00:17:30.506 Arbitration Burst: 1 00:17:30.506 00:17:30.506 Power Management 00:17:30.506 ================ 00:17:30.506 Number of Power States: 1 00:17:30.506 Current Power State: Power State #0 00:17:30.506 Power State #0: 00:17:30.506 Max Power: 0.00 W 00:17:30.506 Non-Operational State: Operational 00:17:30.506 Entry Latency: Not Reported 00:17:30.506 Exit Latency: Not Reported 00:17:30.506 Relative Read Throughput: 0 00:17:30.506 Relative Read Latency: 0 00:17:30.506 Relative Write Throughput: 0 00:17:30.506 Relative Write Latency: 0 00:17:30.506 Idle Power: Not Reported 00:17:30.506 Active Power: Not Reported 00:17:30.506 Non-Operational Permissive Mode: Not Supported 00:17:30.506 00:17:30.506 Health Information 00:17:30.506 ================== 00:17:30.506 Critical Warnings: 00:17:30.506 Available Spare Space: OK 00:17:30.506 Temperature: OK 00:17:30.506 Device Reliability: OK 00:17:30.506 Read Only: No 00:17:30.506 Volatile Memory Backup: OK 00:17:30.506 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:30.506 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:30.506 Available Spare: 0% 00:17:30.506 Available Spare Threshold: 0% 00:17:30.507 Life Percentage Used: 0% 00:17:30.507 Data Units Read: 0 00:17:30.507 Data Units Written: 0 00:17:30.507 Host Read Commands: 0 00:17:30.507 Host Write Commands: 0 00:17:30.507 Controller Busy Time: 0 minutes 00:17:30.507 Power Cycles: 0 00:17:30.507 Power On Hours: 0 hours 00:17:30.507 Unsafe Shutdowns: 0 00:17:30.507 Unrecoverable Media Errors: 0 00:17:30.507 Lifetime Error Log Entries: 0 00:17:30.507 Warning Temperature Time: 0 minutes 00:17:30.507 Critical Temperature Time: 0 minutes 00:17:30.507 00:17:30.507 Number of Queues 00:17:30.507 ================ 00:17:30.507 Number of I/O Submission Queues: 127 00:17:30.507 Number of I/O Completion Queues: 127 00:17:30.507 00:17:30.507 Active Namespaces 00:17:30.507 ================= 00:17:30.507 Namespace ID:1 00:17:30.507 Error Recovery Timeout: Unlimited 00:17:30.507 Command Set Identifier: NVM (00h) 00:17:30.507 Deallocate: Supported 00:17:30.507 Deallocated/Unwritten Error: Not Supported 00:17:30.507 Deallocated Read Value: Unknown 00:17:30.507 Deallocate in Write Zeroes: Not Supported 00:17:30.507 Deallocated Guard Field: 0xFFFF 00:17:30.507 Flush: Supported 00:17:30.507 Reservation: Supported 00:17:30.507 Namespace Sharing Capabilities: Multiple Controllers 00:17:30.507 Size (in LBAs): 131072 (0GiB) 00:17:30.507 Capacity (in LBAs): 131072 (0GiB) 00:17:30.507 Utilization (in LBAs): 131072 (0GiB) 00:17:30.507 NGUID: 3CC96ACCFB0C4D6786159BB96373816A 00:17:30.507 UUID: 3cc96acc-fb0c-4d67-8615-9bb96373816a 00:17:30.507 Thin Provisioning: Not Supported 00:17:30.507 Per-NS Atomic Units: Yes 00:17:30.507 Atomic Boundary Size (Normal): 0 00:17:30.507 Atomic Boundary Size (PFail): 0 00:17:30.507 Atomic Boundary Offset: 0 00:17:30.507 Maximum Single Source Range Length: 65535 00:17:30.507 Maximum Copy Length: 65535 00:17:30.507 Maximum Source Range Count: 1 00:17:30.507 NGUID/EUI64 Never Reused: No 00:17:30.507 Namespace Write Protected: No 00:17:30.507 Number of LBA Formats: 1 00:17:30.507 Current LBA Format: LBA Format #00 00:17:30.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:30.507 00:17:30.507 [2024-12-05 09:49:17.903229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:30.506 [2024-12-05 09:49:17.903253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:30.506 [2024-12-05 09:49:17.903351] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:30.506 [2024-12-05 09:49:17.903374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.507 [2024-12-05 09:49:17.903388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.507 [2024-12-05 09:49:17.903397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.507 [2024-12-05 09:49:17.903409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.507 [2024-12-05 09:49:17.908968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:30.507 [2024-12-05 09:49:17.909020] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:30.507 [2024-12-05 09:49:17.910065] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:30.507 [2024-12-05 09:49:17.910212] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:30.507 [2024-12-05 09:49:17.910249] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:30.507 [2024-12-05 09:49:17.911081] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:30.507 [2024-12-05 09:49:17.911128] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:30.507 [2024-12-05 09:49:17.911651] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:30.507 [2024-12-05 09:49:17.913169] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:30.507 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:30.766 [2024-12-05 09:49:18.363591] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.038 Initializing
NVMe Controllers 00:17:36.038 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:36.038 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:36.038 Initialization complete. Launching workers. 00:17:36.038 ======================================================== 00:17:36.038 Latency(us) 00:17:36.038 Device Information : IOPS MiB/s Average min max 00:17:36.038 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 27536.73 107.57 4643.85 1241.75 11599.55 00:17:36.038 ======================================================== 00:17:36.038 Total : 27536.73 107.57 4643.85 1241.75 11599.55 00:17:36.038 00:17:36.038 [2024-12-05 09:49:23.378720] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.038 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:36.297 [2024-12-05 09:49:23.828438] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:41.562 Initializing NVMe Controllers 00:17:41.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:41.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:41.562 Initialization complete. Launching workers. 00:17:41.562 ======================================================== 00:17:41.562 Latency(us) 00:17:41.562 Device Information : IOPS MiB/s Average min max 00:17:41.562 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16081.00 62.82 7969.56 6012.59 14705.87 00:17:41.562 ======================================================== 00:17:41.562 Total : 16081.00 62.82 7969.56 6012.59 14705.87 00:17:41.562 00:17:41.562 [2024-12-05 09:49:28.852672] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:41.562 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:41.820 [2024-12-05 09:49:29.230971] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.093 [2024-12-05 09:49:34.309025] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.093 Initializing NVMe Controllers 00:17:47.093 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.093 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:47.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:47.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:47.093 Initialization complete. Launching workers. 
00:17:47.093 Starting thread on core 2 00:17:47.093 Starting thread on core 3 00:17:47.093 Starting thread on core 1 00:17:47.093 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:47.351 [2024-12-05 09:49:34.796024] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:50.638 [2024-12-05 09:49:37.921594] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:50.638 Initializing NVMe Controllers 00:17:50.638 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.638 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.638 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:50.638 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:50.638 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:50.638 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:50.638 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:50.638 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:50.638 Initialization complete. Launching workers. 00:17:50.638 Starting thread on core 1 with urgent priority queue 00:17:50.638 Starting thread on core 2 with urgent priority queue 00:17:50.638 Starting thread on core 3 with urgent priority queue 00:17:50.638 Starting thread on core 0 with urgent priority queue 00:17:50.638 SPDK bdev Controller (SPDK1 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:17:50.638 SPDK bdev Controller (SPDK1 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:17:50.638 SPDK bdev Controller (SPDK1 ) core 2: 1088.00 IO/s 91.91 secs/100000 ios 00:17:50.638 SPDK bdev Controller (SPDK1 ) core 3: 597.33 IO/s 167.41 secs/100000 ios 00:17:50.638 ======================================================== 00:17:50.638 00:17:50.638 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:50.898 [2024-12-05 09:49:38.409121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:50.898 Initializing NVMe Controllers 00:17:50.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:50.898 Namespace ID: 1 size: 0GB 00:17:50.898 Initialization complete. 00:17:50.898 INFO: using host memory buffer for IO 00:17:50.898 Hello world! 
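The derived columns in the result tables above are direct transformations of the measured rate: spdk_nvme_perf was run with -o 4096, so MiB/s = IOPS * 4096 / 2^20, and the arbitration tool's secs/100000 ios column is simply 100000 / IO/s. A quick consistency check follows, a sketch with the values copied from the tables above; it is an illustration, not part of the test harness.

    # Sketch: verify the derived columns in the perf and arbitration output above.
    def mib_per_s(iops: float, io_size: int = 4096) -> float:
        return iops * io_size / 2**20        # -o 4096 bytes per I/O

    def secs_per_100k_ios(io_per_s: float) -> float:
        return 100000 / io_per_s             # time to finish 100000 I/Os at that rate

    print(round(mib_per_s(27536.73), 2))         # 107.57 -> matches the read run
    print(round(mib_per_s(16081.00), 2))         # 62.82  -> matches the write run
    print(round(secs_per_100k_ios(1045.33), 2))  # 95.66  -> matches arbitration core 0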
00:17:50.898 [2024-12-05 09:49:38.444882] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:51.157 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:51.416 [2024-12-05 09:49:38.893320] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:52.355 Initializing NVMe Controllers 00:17:52.355 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:52.355 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:52.355 Initialization complete. Launching workers. 00:17:52.355 submit (in ns) avg, min, max = 8572.4, 3659.1, 4030240.9 00:17:52.355 complete (in ns) avg, min, max = 31698.1, 2120.9, 7033496.4 00:17:52.355 00:17:52.355 Submit histogram 00:17:52.355 ================ 00:17:52.355 Range in us Cumulative Count 00:17:52.355 3.651 - 3.665: 0.0183% ( 2) 00:17:52.355 3.665 - 3.680: 0.0640% ( 5) 00:17:52.355 3.680 - 3.695: 0.3201% ( 28) 00:17:52.355 3.695 - 3.709: 0.9877% ( 73) 00:17:52.355 3.709 - 3.724: 2.0029% ( 111) 00:17:52.355 3.724 - 3.753: 6.4569% ( 487) 00:17:52.355 3.753 - 3.782: 13.2980% ( 748) 00:17:52.355 3.782 - 3.811: 21.3554% ( 881) 00:17:52.355 3.811 - 3.840: 30.2908% ( 977) 00:17:52.355 3.840 - 3.869: 39.0434% ( 957) 00:17:52.355 3.869 - 3.898: 48.4635% ( 1030) 00:17:52.355 3.898 - 3.927: 55.8167% ( 804) 00:17:52.355 3.927 - 3.956: 61.5969% ( 632) 00:17:52.355 3.956 - 3.985: 66.4075% ( 526) 00:17:52.355 3.985 - 4.015: 70.4408% ( 441) 00:17:52.355 4.015 - 4.044: 73.6236% ( 348) 00:17:52.355 4.044 - 4.073: 76.1021% ( 271) 00:17:52.355 4.073 - 4.102: 78.0684% ( 215) 00:17:52.355 4.102 - 4.131: 79.9250% ( 203) 00:17:52.355 4.131 - 4.160: 81.7084% ( 195) 00:17:52.355 4.160 - 4.189: 83.7114% ( 219) 00:17:52.355 4.189 - 4.218: 85.4125% ( 186) 00:17:52.355 4.218 - 4.247: 87.0404% ( 178) 00:17:52.355 4.247 - 4.276: 88.5312% ( 163) 00:17:52.355 4.276 - 4.305: 90.0585% ( 167) 00:17:52.355 4.305 - 4.335: 91.0920% ( 113) 00:17:52.355 4.335 - 4.364: 92.1712% ( 118) 00:17:52.355 4.364 - 4.393: 93.0126% ( 92) 00:17:52.355 4.393 - 4.422: 93.5797% ( 62) 00:17:52.355 4.422 - 4.451: 94.1284% ( 60) 00:17:52.355 4.451 - 4.480: 94.5034% ( 41) 00:17:52.355 4.480 - 4.509: 94.8052% ( 33) 00:17:52.355 4.509 - 4.538: 95.1619% ( 39) 00:17:52.355 4.538 - 4.567: 95.3905% ( 25) 00:17:52.355 4.567 - 4.596: 95.5917% ( 22) 00:17:52.355 4.596 - 4.625: 95.7106% ( 13) 00:17:52.355 4.625 - 4.655: 95.8478% ( 15) 00:17:52.355 4.655 - 4.684: 95.9393% ( 10) 00:17:52.355 4.684 - 4.713: 96.0216% ( 9) 00:17:52.355 4.713 - 4.742: 96.0948% ( 8) 00:17:52.355 4.742 - 4.771: 96.1405% ( 5) 00:17:52.355 4.771 - 4.800: 96.1954% ( 6) 00:17:52.355 4.800 - 4.829: 96.2228% ( 3) 00:17:52.355 4.829 - 4.858: 96.2411% ( 2) 00:17:52.355 4.887 - 4.916: 96.2502% ( 1) 00:17:52.355 4.916 - 4.945: 96.2594% ( 1) 00:17:52.355 4.945 - 4.975: 96.2777% ( 2) 00:17:52.355 4.975 - 5.004: 96.2960% ( 2) 00:17:52.355 5.004 - 5.033: 96.3051% ( 1) 00:17:52.355 5.178 - 5.207: 96.3142% ( 1) 00:17:52.355 5.207 - 5.236: 96.3234% ( 1) 00:17:52.355 5.236 - 5.265: 96.3325% ( 1) 00:17:52.355 5.353 - 5.382: 96.3417% ( 1) 00:17:52.356 5.527 - 5.556: 96.3600% ( 2) 00:17:52.356 5.702 - 5.731: 96.3783% ( 2) 00:17:52.356 5.731 - 5.760: 96.3874% ( 1) 00:17:52.356 5.789 - 5.818: 96.4057% ( 2) 00:17:52.356 5.847 - 5.876: 96.4240% ( 2) 
00:17:52.356 5.876 - 5.905: 96.4697% ( 5) 00:17:52.356 5.905 - 5.935: 96.5155% ( 5) 00:17:52.356 5.935 - 5.964: 96.5429% ( 3) 00:17:52.356 5.964 - 5.993: 96.5703% ( 3) 00:17:52.356 5.993 - 6.022: 96.6161% ( 5) 00:17:52.356 6.022 - 6.051: 96.6801% ( 7) 00:17:52.356 6.051 - 6.080: 96.7624% ( 9) 00:17:52.356 6.080 - 6.109: 96.8173% ( 6) 00:17:52.356 6.109 - 6.138: 96.8630% ( 5) 00:17:52.356 6.138 - 6.167: 96.9362% ( 8) 00:17:52.356 6.167 - 6.196: 97.0093% ( 8) 00:17:52.356 6.196 - 6.225: 97.0642% ( 6) 00:17:52.356 6.225 - 6.255: 97.0916% ( 3) 00:17:52.356 6.255 - 6.284: 97.1374% ( 5) 00:17:52.356 6.284 - 6.313: 97.1831% ( 5) 00:17:52.356 6.313 - 6.342: 97.2197% ( 4) 00:17:52.356 6.342 - 6.371: 97.2563% ( 4) 00:17:52.356 6.371 - 6.400: 97.2928% ( 4) 00:17:52.356 6.400 - 6.429: 97.3294% ( 4) 00:17:52.356 6.429 - 6.458: 97.3477% ( 2) 00:17:52.356 6.458 - 6.487: 97.3569% ( 1) 00:17:52.356 6.516 - 6.545: 97.3935% ( 4) 00:17:52.356 6.545 - 6.575: 97.4209% ( 3) 00:17:52.356 6.575 - 6.604: 97.4300% ( 1) 00:17:52.356 6.604 - 6.633: 97.4392% ( 1) 00:17:52.356 6.633 - 6.662: 97.4575% ( 2) 00:17:52.356 6.662 - 6.691: 97.4758% ( 2) 00:17:52.356 6.720 - 6.749: 97.4849% ( 1) 00:17:52.356 6.778 - 6.807: 97.5215% ( 4) 00:17:52.356 6.807 - 6.836: 97.5398% ( 2) 00:17:52.356 6.836 - 6.865: 97.5672% ( 3) 00:17:52.356 6.895 - 6.924: 97.5947% ( 3) 00:17:52.356 6.924 - 6.953: 97.6130% ( 2) 00:17:52.356 6.953 - 6.982: 97.6404% ( 3) 00:17:52.356 6.982 - 7.011: 97.6495% ( 1) 00:17:52.356 7.011 - 7.040: 97.6678% ( 2) 00:17:52.356 7.040 - 7.069: 97.6953% ( 3) 00:17:52.356 7.069 - 7.098: 97.7044% ( 1) 00:17:52.356 7.098 - 7.127: 97.7136% ( 1) 00:17:52.356 7.127 - 7.156: 97.7318% ( 2) 00:17:52.356 7.156 - 7.185: 97.7501% ( 2) 00:17:52.356 7.185 - 7.215: 97.7593% ( 1) 00:17:52.356 7.244 - 7.273: 97.7684% ( 1) 00:17:52.356 7.273 - 7.302: 97.7776% ( 1) 00:17:52.356 7.302 - 7.331: 97.8050% ( 3) 00:17:52.356 7.331 - 7.360: 97.8233% ( 2) 00:17:52.356 7.360 - 7.389: 97.8324% ( 1) 00:17:52.356 7.389 - 7.418: 97.8416% ( 1) 00:17:52.356 7.447 - 7.505: 97.8782% ( 4) 00:17:52.356 7.505 - 7.564: 97.9422% ( 7) 00:17:52.356 7.564 - 7.622: 97.9605% ( 2) 00:17:52.356 7.622 - 7.680: 98.0062% ( 5) 00:17:52.356 7.680 - 7.738: 98.0519% ( 5) 00:17:52.356 7.738 - 7.796: 98.0885% ( 4) 00:17:52.356 7.796 - 7.855: 98.1343% ( 5) 00:17:52.356 7.855 - 7.913: 98.1800% ( 5) 00:17:52.356 7.913 - 7.971: 98.2074% ( 3) 00:17:52.356 7.971 - 8.029: 98.2440% ( 4) 00:17:52.356 8.029 - 8.087: 98.3080% ( 7) 00:17:52.356 8.087 - 8.145: 98.3446% ( 4) 00:17:52.356 8.145 - 8.204: 98.3995% ( 6) 00:17:52.356 8.262 - 8.320: 98.4178% ( 2) 00:17:52.356 8.320 - 8.378: 98.4361% ( 2) 00:17:52.356 8.378 - 8.436: 98.4452% ( 1) 00:17:52.356 8.495 - 8.553: 98.4909% ( 5) 00:17:52.356 8.553 - 8.611: 98.5275% ( 4) 00:17:52.356 8.611 - 8.669: 98.5550% ( 3) 00:17:52.356 8.669 - 8.727: 98.5824% ( 3) 00:17:52.356 8.785 - 8.844: 98.6007% ( 2) 00:17:52.356 8.844 - 8.902: 98.6190% ( 2) 00:17:52.356 8.902 - 8.960: 98.6373% ( 2) 00:17:52.356 8.960 - 9.018: 98.6464% ( 1) 00:17:52.356 9.018 - 9.076: 98.6830% ( 4) 00:17:52.356 9.076 - 9.135: 98.7196% ( 4) 00:17:52.356 9.135 - 9.193: 98.7287% ( 1) 00:17:52.356 9.193 - 9.251: 98.7562% ( 3) 00:17:52.356 9.251 - 9.309: 98.7653% ( 1) 00:17:52.356 9.309 - 9.367: 98.7745% ( 1) 00:17:52.356 9.367 - 9.425: 98.7836% ( 1) 00:17:52.356 9.425 - 9.484: 98.7928% ( 1) 00:17:52.356 9.775 - 9.833: 98.8110% ( 2) 00:17:52.356 9.833 - 9.891: 98.8293% ( 2) 00:17:52.356 9.891 - 9.949: 98.8476% ( 2) 00:17:52.356 9.949 - 10.007: 98.8568% ( 1) 00:17:52.356 10.007 - 
10.065: 98.8659% ( 1) 00:17:52.356 10.065 - 10.124: 98.8842% ( 2) 00:17:52.356 10.182 - 10.240: 98.8934% ( 1) 00:17:52.356 10.298 - 10.356: 98.9025% ( 1) 00:17:52.356 10.356 - 10.415: 98.9117% ( 1) 00:17:52.356 10.473 - 10.531: 98.9208% ( 1) 00:17:52.356 10.531 - 10.589: 98.9299% ( 1) 00:17:52.356 10.589 - 10.647: 98.9391% ( 1) 00:17:52.356 10.647 - 10.705: 98.9574% ( 2) 00:17:52.356 10.822 - 10.880: 98.9757% ( 2) 00:17:52.356 10.880 - 10.938: 98.9940% ( 2) 00:17:52.356 11.113 - 11.171: 99.0031% ( 1) 00:17:52.356 11.229 - 11.287: 99.0214% ( 2) 00:17:52.356 11.345 - 11.404: 99.0397% ( 2) 00:17:52.356 11.404 - 11.462: 99.0488% ( 1) 00:17:52.356 11.462 - 11.520: 99.0580% ( 1) 00:17:52.356 11.636 - 11.695: 99.0671% ( 1) 00:17:52.356 11.695 - 11.753: 99.0946% ( 3) 00:17:52.356 11.753 - 11.811: 99.1037% ( 1) 00:17:52.356 11.869 - 11.927: 99.1129% ( 1) 00:17:52.356 11.985 - 12.044: 99.1220% ( 1) 00:17:52.356 12.044 - 12.102: 99.1494% ( 3) 00:17:52.356 12.276 - 12.335: 99.1586% ( 1) 00:17:52.356 12.335 - 12.393: 99.1769% ( 2) 00:17:52.356 12.567 - 12.625: 99.1860% ( 1) 00:17:52.356 12.625 - 12.684: 99.1952% ( 1) 00:17:52.356 12.684 - 12.742: 99.2135% ( 2) 00:17:52.356 12.742 - 12.800: 99.2226% ( 1) 00:17:52.356 12.800 - 12.858: 99.2318% ( 1) 00:17:52.356 12.858 - 12.916: 99.2409% ( 1) 00:17:52.356 12.916 - 12.975: 99.2592% ( 2) 00:17:52.356 12.975 - 13.033: 99.2683% ( 1) 00:17:52.356 13.091 - 13.149: 99.2866% ( 2) 00:17:52.356 13.149 - 13.207: 99.2958% ( 1) 00:17:52.356 13.324 - 13.382: 99.3232% ( 3) 00:17:52.356 13.498 - 13.556: 99.3324% ( 1) 00:17:52.356 13.556 - 13.615: 99.3415% ( 1) 00:17:52.356 13.673 - 13.731: 99.3506% ( 1) 00:17:52.356 13.789 - 13.847: 99.3598% ( 1) 00:17:52.356 13.847 - 13.905: 99.3872% ( 3) 00:17:52.356 14.022 - 14.080: 99.3964% ( 1) 00:17:52.356 14.429 - 14.487: 99.4055% ( 1) 00:17:52.356 14.487 - 14.545: 99.4147% ( 1) 00:17:52.356 14.604 - 14.662: 99.4238% ( 1) 00:17:52.356 14.662 - 14.720: 99.4330% ( 1) 00:17:52.356 15.011 - 15.127: 99.4421% ( 1) 00:17:52.356 15.476 - 15.593: 99.4513% ( 1) 00:17:52.356 15.709 - 15.825: 99.4604% ( 1) 00:17:52.356 16.407 - 16.524: 99.4695% ( 1) 00:17:52.356 16.524 - 16.640: 99.4787% ( 1) 00:17:52.356 16.873 - 16.989: 99.4878% ( 1) 00:17:52.356 17.222 - 17.338: 99.4970% ( 1) 00:17:52.356 17.455 - 17.571: 99.5061% ( 1) 00:17:52.356 18.153 - 18.269: 99.5153% ( 1) 00:17:52.356 18.385 - 18.502: 99.5244% ( 1) 00:17:52.356 19.549 - 19.665: 99.5336% ( 1) 00:17:52.356 19.665 - 19.782: 99.5610% ( 3) 00:17:52.356 19.782 - 19.898: 99.5701% ( 1) 00:17:52.356 19.898 - 20.015: 99.5976% ( 3) 00:17:52.356 20.015 - 20.131: 99.6067% ( 1) 00:17:52.356 20.131 - 20.247: 99.6525% ( 5) 00:17:52.356 20.247 - 20.364: 99.6708% ( 2) 00:17:52.356 20.364 - 20.480: 99.6799% ( 1) 00:17:52.356 20.480 - 20.596: 99.6982% ( 2) 00:17:52.356 20.596 - 20.713: 99.7439% ( 5) 00:17:52.356 20.829 - 20.945: 99.7531% ( 1) 00:17:52.356 20.945 - 21.062: 99.7622% ( 1) 00:17:52.356 21.993 - 22.109: 99.7714% ( 1) 00:17:52.356 23.040 - 23.156: 99.7805% ( 1) 00:17:52.356 23.389 - 23.505: 99.7988% ( 2) 00:17:52.356 24.320 - 24.436: 99.8079% ( 1) 00:17:52.356 24.902 - 25.018: 99.8171% ( 1) 00:17:52.356 25.949 - 26.065: 99.8262% ( 1) 00:17:52.356 27.695 - 27.811: 99.8354% ( 1) 00:17:52.356 28.509 - 28.625: 99.8445% ( 1) 00:17:52.356 29.091 - 29.207: 99.8537% ( 1) 00:17:52.356 30.953 - 31.185: 99.8628% ( 1) 00:17:52.356 38.633 - 38.865: 99.8720% ( 1) 00:17:52.356 44.916 - 45.149: 99.8811% ( 1) 00:17:52.356 53.062 - 53.295: 99.8903% ( 1) 00:17:52.356 3991.738 - 4021.527: 99.9909% ( 11) 
00:17:52.356 4021.527 - 4051.316: 100.0000% ( 1) 00:17:52.356 00:17:52.356 Complete histogram 00:17:52.356 ================== 00:17:52.356 Range in us Cumulative Count 00:17:52.356 2.109 - 2.124: 0.0274% ( 3) 00:17:52.356 2.124 - 2.138: 0.9054% ( 96) 00:17:52.356 2.138 - 2.153: 9.1275% ( 899) 00:17:52.356 2.153 - 2.167: 26.8154% ( 1934) 00:17:52.356 2.167 - 2.182: 48.5733% ( 2379) 00:17:52.356 2.182 - 2.196: 66.3984% ( 1949) 00:17:52.356 2.196 - 2.211: 76.4222% ( 1096) 00:17:52.356 2.211 - 2.225: 79.8518% ( 375) 00:17:52.356 2.225 - 2.240: 81.4615% ( 176) 00:17:52.356 2.240 - 2.255: 82.2389% ( 85) 00:17:52.356 2.255 - 2.269: 83.1535% ( 100) 00:17:52.356 2.269 - 2.284: 84.6991% ( 169) 00:17:52.356 2.284 - 2.298: 86.7295% ( 222) 00:17:52.356 2.298 - 2.313: 88.5312% ( 197) 00:17:52.356 2.313 - 2.327: 89.9488% ( 155) 00:17:52.357 2.327 - 2.342: 90.7902% ( 92) 00:17:52.357 2.342 - 2.356: 91.6408% ( 93) 00:17:52.357 2.356 - 2.371: 92.3724% ( 80) 00:17:52.357 2.371 - 2.385: 93.0218% ( 71) 00:17:52.357 2.385 - 2.400: 93.4791% ( 50) 00:17:52.357 2.400 - 2.415: 94.0004% ( 57) 00:17:52.357 2.415 - 2.429: 94.5217% ( 57) 00:17:52.357 2.429 - 2.444: 94.9607% ( 48) 00:17:52.357 2.444 - 2.458: 95.1985% ( 26) 00:17:52.357 2.458 - 2.473: 95.4454% ( 27) 00:17:52.357 2.473 - 2.487: 95.7198% ( 30) 00:17:52.357 2.487 - 2.502: 96.0948% ( 41) 00:17:52.357 2.502 - 2.516: 96.5155% ( 46) 00:17:52.357 2.516 - 2.531: 96.7167% ( 22) 00:17:52.357 2.531 - 2.545: 96.9819% ( 29) 00:17:52.357 2.545 - 2.560: 97.1282% ( 16) 00:17:52.357 2.560 - 2.575: 97.3111% ( 20) 00:17:52.357 2.575 - 2.589: 97.4026% ( 10) 00:17:52.357 2.589 - 2.604: 97.5123% ( 12) 00:17:52.357 2.604 - 2.618: 97.5947% ( 9) 00:17:52.357 2.618 - 2.633: 97.6953% ( 11) 00:17:52.357 2.633 - 2.647: 97.7776% ( 9) 00:17:52.357 2.647 - 2.662: 97.8416% ( 7) 00:17:52.357 2.662 - 2.676: 97.9056% ( 7) 00:17:52.357 2.676 - 2.691: 97.9605% ( 6) 00:17:52.357 2.691 - 2.705: 98.0062% ( 5) 00:17:52.357 2.705 - 2.720: 98.0794% ( 8) 00:17:52.357 2.720 - 2.735: 98.0977% ( 2) 00:17:52.357 2.735 - 2.749: 98.1160% ( 2) 00:17:52.357 2.749 - 2.764: 98.1343% ( 2) 00:17:52.357 2.764 - 2.778: 98.1434% ( 1) 00:17:52.357 2.778 - 2.793: 98.1526% ( 1) 00:17:52.357 2.836 - 2.851: 98.1617% ( 1) 00:17:52.357 2.851 - 2.865: 98.1800% ( 2) 00:17:52.357 2.865 - 2.880: 98.1891% ( 1) 00:17:52.357 2.895 - 2.909: 98.2074% ( 2) 00:17:52.357 2.909 - 2.924: 98.2257% ( 2) 00:17:52.357 2.924 - 2.938: 98.2349% ( 1) 00:17:52.357 2.938 - 2.953: 98.2440% ( 1) 00:17:52.357 2.953 - 2.967: 98.2623% ( 2) 00:17:52.357 2.967 - 2.982: 98.2897% ( 3) 00:17:52.357 2.982 - 2.996: 98.3080% ( 2) 00:17:52.357 2.996 - 3.011: 98.3172% ( 1) 00:17:52.357 3.040 - 3.055: 98.3263% ( 1) 00:17:52.357 3.098 - 3.113: 98.3355% ( 1) 00:17:52.357 3.113 - 3.127: 98.3446% ( 1) 00:17:52.357 3.156 - 3.171: 98.3538% ( 1) 00:17:52.357 3.811 - 3.840: 98.3629% ( 1) 00:17:52.357 3.898 - 3.927: 98.3721% ( 1) 00:17:52.357 3.956 - 3.985: 98.3903% ( 2) 00:17:52.357 4.044 - 4.073: 98.3995% ( 1) 00:17:52.357 4.189 - 4.218: 98.4086% ( 1) 00:17:52.357 4.335 - 4.364: 98.4269% ( 2) 00:17:52.357 4.422 - 4.451: 98.4361% ( 1) 00:17:52.357 4.451 - 4.480: 98.4452% ( 1) 00:17:52.357 4.625 - 4.655: 98.4544% ( 1) 00:17:52.357 4.655 - 4.684: 98.4635% ( 1) 00:17:52.357 4.684 - 4.713: 98.4727% ( 1) 00:17:52.357 4.742 - 4.771: 98.4818% ( 1) 00:17:52.357 4.800 - 4.829: 98.5092% ( 3) 00:17:52.357 4.829 - 4.858: 98.5184% ( 1) 00:17:52.357 4.858 - 4.887: 98.5275% ( 1) 00:17:52.357 4.945 - 4.975: 98.5367% ( 1) 00:17:52.357 4.975 - 5.004: 98.5550% ( 2) 00:17:52.357 5.033 - 
5.062: 98.5641% ( 1) 00:17:52.357 5.091 - 5.120: 98.5733% ( 1) 00:17:52.357 5.847 - 5.876: 98.5824% ( 1) 00:17:52.357 5.905 - 5.935: 98.5915% ( 1) 00:17:52.357 5.993 - 6.022: 98.6007% ( 1) 00:17:52.357 6.051 - 6.080: 98.6281% ( 3) 00:17:52.357 6.109 - 6.138: 98.6373% ( 1) 00:17:52.357 6.167 - 6.196: 98.6464% ( 1) 00:17:52.357 6.400 - 6.429: 98.6556% ( 1) 00:17:52.357 6.458 - 6.487: 98.6647% ( 1) 00:17:52.357 6.516 - 6.545: 98.6739% ( 1) 00:17:52.357 6.604 - 6.633: 98.6830% ( 1) 00:17:52.357 6.895 - 6.924: 98.6922% ( 1) 00:17:52.357 7.331 - 7.360: 98.7013% ( 1) 00:17:52.357 7.738 - 7.796: 98.7104% ( 1) 00:17:52.357 7.855 - 7.913: 98.7196% ( 1) 00:17:52.357 9.309 - 9.367: 98.7287% ( 1) 00:17:52.357 9.367 - 9.425: 98.7379% ( 1) 00:17:52.357 9.833 - 9.891: 98.7470% ( 1) 00:17:52.357 9.949 - 10.007: 98.7562% ( 1) 00:17:52.357 10.065 - 10.124: 98.7653% ( 1) 00:17:52.357 10.356 - 10.415: 98.7745% ( 1) 00:17:52.357 10.531 - 10.589: 98.7836% ( 1) 00:17:52.357 10.589 - 10.647: 98.7928% ( 1) 00:17:52.357 10.647 - 10.705: 98.8110% ( 2) 00:17:52.357 10.764 - 10.822: 98.8202% ( 1) 00:17:52.357 10.938 - 10.996: 98.8385% ( 2) 00:17:52.357 11.345 - 11.404: 98.8476% ( 1) 00:17:52.357 11.462 - 11.520: 98.8568% ( 1) 00:17:52.357 11.985 - 12.044: 98.8751% ( 2) 00:17:52.357 12.044 - 12.102: 98.8842% ( 1) 00:17:52.357 12.218 - 12.276: 98.8934% ( 1) 00:17:52.357 12.393 - 12.451: 98.9025% ( 1) 00:17:52.357 12.509 - 12.567: 98.9208% ( 2) 00:17:52.357 12.567 - 12.625: 98.9299% ( 1) 00:17:52.357 12.800 - 12.858: 98.9391% ( 1) [2024-12-05 09:49:39.910846] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:52.617 12.916 - 12.975: 98.9482% ( 1) 00:17:52.617 13.033 - 13.091: 98.9665% ( 2) 00:17:52.617 13.207 - 13.265: 98.9757% ( 1) 00:17:52.617 13.556 - 13.615: 98.9940% ( 2) 00:17:52.617 13.789 - 13.847: 99.0031% ( 1) 00:17:52.617 13.905 - 13.964: 99.0123% ( 1) 00:17:52.617 13.964 - 14.022: 99.0214% ( 1) 00:17:52.617 14.022 - 14.080: 99.0305% ( 1) 00:17:52.617 14.487 - 14.545: 99.0397% ( 1) 00:17:52.617 14.604 - 14.662: 99.0580% ( 2) 00:17:52.617 14.720 - 14.778: 99.0671% ( 1) 00:17:52.617 15.709 - 15.825: 99.0763% ( 1) 00:17:52.617 17.920 - 18.036: 99.0854% ( 1) 00:17:52.617 18.153 - 18.269: 99.1129% ( 3) 00:17:52.617 18.269 - 18.385: 99.1677% ( 6) 00:17:52.617 18.385 - 18.502: 99.1952% ( 3) 00:17:52.617 18.735 - 18.851: 99.2043% ( 1) 00:17:52.617 20.829 - 20.945: 99.2135% ( 1) 00:17:52.617 21.760 - 21.876: 99.2226% ( 1) 00:17:52.617 24.087 - 24.204: 99.2318% ( 1) 00:17:52.617 26.996 - 27.113: 99.2409% ( 1) 00:17:52.617 27.578 - 27.695: 99.2500% ( 1) 00:17:52.617 30.720 - 30.953: 99.2592% ( 1) 00:17:52.617 30.953 - 31.185: 99.2683% ( 1) 00:17:52.617 1213.905 - 1221.353: 99.2775% ( 1) 00:17:52.617 3038.487 - 3053.382: 99.2866% ( 1) 00:17:52.617 3068.276 - 3083.171: 99.2958% ( 1) 00:17:52.617 3961.949 - 3991.738: 99.3324% ( 4) 00:17:52.617 3991.738 - 4021.527: 99.8537% ( 57) 00:17:52.617 4021.527 - 4051.316: 99.9726% ( 13) 00:17:52.617 4110.895 - 4140.684: 99.9817% ( 1) 00:17:52.617 5987.607 - 6017.396: 99.9909% ( 1) 00:17:52.617 7030.225 - 7060.015: 100.0000% ( 1) 00:17:52.617 00:17:52.617 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:52.618 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:52.618 09:49:40
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:52.618 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:52.618 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:52.877 [ 00:17:52.877 { 00:17:52.877 "allow_any_host": true, 00:17:52.877 "hosts": [], 00:17:52.877 "listen_addresses": [], 00:17:52.877 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.877 "subtype": "Discovery" 00:17:52.877 }, 00:17:52.877 { 00:17:52.877 "allow_any_host": true, 00:17:52.877 "hosts": [], 00:17:52.877 "listen_addresses": [ 00:17:52.877 { 00:17:52.877 "adrfam": "IPv4", 00:17:52.877 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:52.877 "trsvcid": "0", 00:17:52.877 "trtype": "VFIOUSER" 00:17:52.877 } 00:17:52.877 ], 00:17:52.877 "max_cntlid": 65519, 00:17:52.877 "max_namespaces": 32, 00:17:52.877 "min_cntlid": 1, 00:17:52.877 "model_number": "SPDK bdev Controller", 00:17:52.877 "namespaces": [ 00:17:52.877 { 00:17:52.877 "bdev_name": "Malloc1", 00:17:52.877 "name": "Malloc1", 00:17:52.877 "nguid": "3CC96ACCFB0C4D6786159BB96373816A", 00:17:52.877 "nsid": 1, 00:17:52.877 "uuid": "3cc96acc-fb0c-4d67-8615-9bb96373816a" 00:17:52.877 } 00:17:52.877 ], 00:17:52.877 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:52.877 "serial_number": "SPDK1", 00:17:52.877 "subtype": "NVMe" 00:17:52.877 }, 00:17:52.877 { 00:17:52.877 "allow_any_host": true, 00:17:52.877 "hosts": [], 00:17:52.877 "listen_addresses": [ 00:17:52.877 { 00:17:52.877 "adrfam": "IPv4", 00:17:52.877 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:52.877 "trsvcid": "0", 00:17:52.877 "trtype": "VFIOUSER" 00:17:52.877 } 00:17:52.877 ], 00:17:52.877 "max_cntlid": 65519, 00:17:52.877 "max_namespaces": 32, 00:17:52.877 "min_cntlid": 1, 00:17:52.877 "model_number": "SPDK bdev Controller", 00:17:52.877 "namespaces": [ 00:17:52.877 { 00:17:52.877 "bdev_name": "Malloc2", 00:17:52.877 "name": "Malloc2", 00:17:52.877 "nguid": "E264DCFC89BB4B37AA7EBECB434308A9", 00:17:52.877 "nsid": 1, 00:17:52.877 "uuid": "e264dcfc-89bb-4b37-aa7e-becb434308a9" 00:17:52.877 } 00:17:52.877 ], 00:17:52.877 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:52.877 "serial_number": "SPDK2", 00:17:52.877 "subtype": "NVMe" 00:17:52.877 } 00:17:52.877 ] 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=82053 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:17:52.877 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:53.137 [2024-12-05 09:49:40.616816] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:53.137 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:53.730 Malloc3 00:17:53.730 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:53.730 [2024-12-05 09:49:41.320269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:53.730 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:53.989 Asynchronous Event Request test 00:17:53.989 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:53.989 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:53.989 Registering asynchronous event callbacks... 00:17:53.989 Starting namespace attribute notice tests for all controllers... 00:17:53.989 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:53.989 aer_cb - Changed Namespace 00:17:53.989 Cleaning up... 
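The traced steps between starting test/nvme/aer/aer and the "aer_cb - Changed Namespace" line above are: poll for the tool's touch file, remove it, then hot-add a namespace so the target raises a namespace-attribute-changed AER. A rough Python paraphrase of those bash helpers follows, using the exact RPC arguments from the trace; this is a sketch for orientation, not the test script itself.

    import os, subprocess, time

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    TOUCH = "/tmp/aer_touch_file"

    # waitforfile: the aer tool touches TOUCH once its AER callback is registered
    for _ in range(200):               # matches the traced "$i -lt 200" loop, 0.1 s steps
        if os.path.exists(TOUCH):
            break
        time.sleep(0.1)
    if os.path.exists(TOUCH):
        os.remove(TOUCH)               # rm -f /tmp/aer_touch_file

    # hot-add a second namespace; the target then emits the AER seen above
    subprocess.run([RPC, "bdev_malloc_create", "64", "512", "--name", "Malloc3"], check=True)
    subprocess.run([RPC, "nvmf_subsystem_add_ns", "nqn.2019-07.io.spdk:cnode1",
                    "Malloc3", "-n", "2"], check=True)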
00:17:54.248 [ 00:17:54.248 { 00:17:54.248 "allow_any_host": true, 00:17:54.248 "hosts": [], 00:17:54.248 "listen_addresses": [], 00:17:54.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:54.248 "subtype": "Discovery" 00:17:54.248 }, 00:17:54.248 { 00:17:54.248 "allow_any_host": true, 00:17:54.248 "hosts": [], 00:17:54.248 "listen_addresses": [ 00:17:54.248 { 00:17:54.248 "adrfam": "IPv4", 00:17:54.248 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:54.248 "trsvcid": "0", 00:17:54.248 "trtype": "VFIOUSER" 00:17:54.248 } 00:17:54.248 ], 00:17:54.248 "max_cntlid": 65519, 00:17:54.248 "max_namespaces": 32, 00:17:54.248 "min_cntlid": 1, 00:17:54.248 "model_number": "SPDK bdev Controller", 00:17:54.248 "namespaces": [ 00:17:54.248 { 00:17:54.248 "bdev_name": "Malloc1", 00:17:54.248 "name": "Malloc1", 00:17:54.248 "nguid": "3CC96ACCFB0C4D6786159BB96373816A", 00:17:54.248 "nsid": 1, 00:17:54.248 "uuid": "3cc96acc-fb0c-4d67-8615-9bb96373816a" 00:17:54.248 }, 00:17:54.248 { 00:17:54.248 "bdev_name": "Malloc3", 00:17:54.248 "name": "Malloc3", 00:17:54.248 "nguid": "1AFE7D72651043BFB5668D1959C6F591", 00:17:54.248 "nsid": 2, 00:17:54.248 "uuid": "1afe7d72-6510-43bf-b566-8d1959c6f591" 00:17:54.248 } 00:17:54.248 ], 00:17:54.248 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:54.248 "serial_number": "SPDK1", 00:17:54.248 "subtype": "NVMe" 00:17:54.248 }, 00:17:54.248 { 00:17:54.248 "allow_any_host": true, 00:17:54.248 "hosts": [], 00:17:54.248 "listen_addresses": [ 00:17:54.248 { 00:17:54.248 "adrfam": "IPv4", 00:17:54.248 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:54.248 "trsvcid": "0", 00:17:54.248 "trtype": "VFIOUSER" 00:17:54.248 } 00:17:54.248 ], 00:17:54.248 "max_cntlid": 65519, 00:17:54.248 "max_namespaces": 32, 00:17:54.248 "min_cntlid": 1, 00:17:54.248 "model_number": "SPDK bdev Controller", 00:17:54.248 "namespaces": [ 00:17:54.248 { 00:17:54.248 "bdev_name": "Malloc2", 00:17:54.248 "name": "Malloc2", 00:17:54.248 "nguid": "E264DCFC89BB4B37AA7EBECB434308A9", 00:17:54.248 "nsid": 1, 00:17:54.248 "uuid": "e264dcfc-89bb-4b37-aa7e-becb434308a9" 00:17:54.248 } 00:17:54.248 ], 00:17:54.248 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:54.248 "serial_number": "SPDK2", 00:17:54.248 "subtype": "NVMe" 00:17:54.248 } 00:17:54.248 ] 00:17:54.248 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 82053 00:17:54.248 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:54.248 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:54.248 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:54.248 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:54.248 [2024-12-05 09:49:41.690582] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:17:54.248 [2024-12-05 09:49:41.690696] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82093 ] 00:17:54.248 [2024-12-05 09:49:41.863781] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:54.248 [2024-12-05 09:49:41.870519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:54.248 [2024-12-05 09:49:41.870573] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff13d5b1000 00:17:54.248 [2024-12-05 09:49:41.871502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:54.248 [2024-12-05 09:49:41.872500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:54.248 [2024-12-05 09:49:41.873523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:54.248 [2024-12-05 09:49:41.874524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:54.248 [2024-12-05 09:49:41.875534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:54.509 [2024-12-05 09:49:41.876534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:54.509 [2024-12-05 09:49:41.877540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:54.509 [2024-12-05 09:49:41.878576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:54.509 [2024-12-05 09:49:41.879571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:54.509 [2024-12-05 09:49:41.879615] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff13d5a6000 00:17:54.509 [2024-12-05 09:49:41.880889] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:54.509 [2024-12-05 09:49:41.893734] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:54.509 [2024-12-05 09:49:41.893793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:54.509 [2024-12-05 09:49:41.898998] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:54.509 [2024-12-05 09:49:41.899129] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:54.509 [2024-12-05 09:49:41.899646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 
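The Bar/Flags lines above come from the vfio-user region-info handshake; the Flags word is the VFIO_REGION_INFO_FLAG_* bitmask from <linux/vfio.h>, and the "Sparse region" lines list the sub-ranges that may be mmapped directly. Decoding the logged values, Bar 0 (Flags 0xf, Size 0x2000) exposes its second page (Offset 0x1000, Size 0x1000) for mmap while the first page stays trapped, which is consistent with the register accesses below going through explicit get_reg/set_reg messages. A small decode sketch, as an illustration:

    # Sketch: decode the "Flags" word printed for each BAR above.
    # Bit values are the VFIO_REGION_INFO_FLAG_* constants from <linux/vfio.h>.
    FLAG_NAMES = {0x1: "READ", 0x2: "WRITE", 0x4: "MMAP", 0x8: "CAPS"}

    def region_flags(flags: int) -> str:
        return "|".join(n for bit, n in FLAG_NAMES.items() if flags & bit)

    print(region_flags(0xF))  # Bars 0 and 9: READ|WRITE|MMAP|CAPS (sparse mmap advertised)
    print(region_flags(0x3))  # Bars 4, 5, 7: READ|WRITE only -> every access is trapped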
00:17:54.509 [2024-12-05 09:49:41.899683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:54.509 [2024-12-05 09:49:41.899695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:54.509 [2024-12-05 09:49:41.899991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:54.509 [2024-12-05 09:49:41.900026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:54.509 [2024-12-05 09:49:41.900046] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:54.509 [2024-12-05 09:49:41.901010] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:54.509 [2024-12-05 09:49:41.901047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:54.509 [2024-12-05 09:49:41.901065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:54.509 [2024-12-05 09:49:41.904958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:54.509 [2024-12-05 09:49:41.904991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:54.509 [2024-12-05 09:49:41.905092] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:54.509 [2024-12-05 09:49:41.905113] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:54.509 [2024-12-05 09:49:41.905125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:54.509 [2024-12-05 09:49:41.905141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:54.509 [2024-12-05 09:49:41.905254] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:54.509 [2024-12-05 09:49:41.905263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:54.509 [2024-12-05 09:49:41.905274] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:54.510 [2024-12-05 09:49:41.906099] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:54.510 [2024-12-05 09:49:41.907102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:54.510 [2024-12-05 09:49:41.908101] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:54.510 [2024-12-05 09:49:41.909093] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:54.510 [2024-12-05 09:49:41.909218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:54.510 [2024-12-05 09:49:41.910114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:54.510 [2024-12-05 09:49:41.910164] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:54.510 [2024-12-05 09:49:41.910186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.910222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:54.510 [2024-12-05 09:49:41.910240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.910265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:54.510 [2024-12-05 09:49:41.910275] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:54.510 [2024-12-05 09:49:41.910292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.510 [2024-12-05 09:49:41.910314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.915961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.916007] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:54.510 [2024-12-05 09:49:41.916021] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:54.510 [2024-12-05 09:49:41.916029] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:54.510 [2024-12-05 09:49:41.916039] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:54.510 [2024-12-05 09:49:41.916048] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:54.510 [2024-12-05 09:49:41.916058] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:54.510 [2024-12-05 09:49:41.916066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.916095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 
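
The register traffic above is the standard NVMe enable handshake carried over vfio-user: program AQA/ASQ/ACQ (offsets 0x24/0x28/0x30), write CC (offset 0x14) with EN = 1 (here 0x460001), then poll CSTS (offset 0x1c) until RDY = 1, which is exactly the "offset 0x1c, value 0x1" read that follows. A schematic sketch of that sequence, with hypothetical read_reg4/write_reg4 callables standing in for the nvme_vfio_ctrlr_get/set_reg_4 calls in the log:

    import time

    CC, CSTS = 0x14, 0x1C      # controller configuration / controller status offsets
    CC_EN = 1 << 0             # CC.EN bit
    CSTS_RDY = 1 << 0          # CSTS.RDY bit

    def enable_controller(read_reg4, write_reg4, timeout_s=15.0):
        # Mirror of "check en" -> "enable controller by writing CC.EN = 1"
        # -> "wait for CSTS.RDY = 1", with the 15000 ms state timeout above.
        write_reg4(CC, read_reg4(CC) | CC_EN)      # e.g. 0x460000 -> 0x460001
        deadline = time.monotonic() + timeout_s
        while not (read_reg4(CSTS) & CSTS_RDY):
            if time.monotonic() > deadline:
                raise TimeoutError("CSTS.RDY did not become 1")
            time.sleep(0.001)
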
00:17:54.510 [2024-12-05 09:49:41.916115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.923957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.924008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.510 [2024-12-05 09:49:41.924027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.510 [2024-12-05 09:49:41.924044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.510 [2024-12-05 09:49:41.924058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:54.510 [2024-12-05 09:49:41.924067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.924085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.924100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.931958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.931984] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:54.510 [2024-12-05 09:49:41.932000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.932013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.932026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.932041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.939959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.940071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.940102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.940119] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:54.510 [2024-12-05 09:49:41.940131] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002f9000 00:17:54.510 [2024-12-05 09:49:41.940138] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.510 [2024-12-05 09:49:41.940152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.947968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.948032] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:54.510 [2024-12-05 09:49:41.948055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.948098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.948120] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:54.510 [2024-12-05 09:49:41.948131] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:54.510 [2024-12-05 09:49:41.948138] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.510 [2024-12-05 09:49:41.948155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.955952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.956023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.956047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.956081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:54.510 [2024-12-05 09:49:41.956091] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:54.510 [2024-12-05 09:49:41.956099] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.510 [2024-12-05 09:49:41.956114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.963959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.964019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state 
to set supported features (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964100] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:54.510 [2024-12-05 09:49:41.964110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:54.510 [2024-12-05 09:49:41.964119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:54.510 [2024-12-05 09:49:41.964166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.971951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.972001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.979956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.979999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.987961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.988002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:54.510 [2024-12-05 09:49:41.995949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:54.510 [2024-12-05 09:49:41.996011] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:54.510 [2024-12-05 09:49:41.996025] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:54.511 [2024-12-05 09:49:41.996037] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:54.511 [2024-12-05 09:49:41.996043] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:54.511 [2024-12-05 09:49:41.996052] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:54.511 [2024-12-05 09:49:41.996063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:54.511 [2024-12-05 09:49:41.996081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 
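
The PRP fields in the GET LOG PAGE commands above (continued just below) follow directly from the 4096-byte memory page size: the 8192-byte buffer at 0x2000002f6000 needs two entries (PRP1 = 0x2000002f6000, PRP2 = 0x2000002f7000, "Number of PRP entries: 2"), while the 512-byte buffers fit in PRP1 alone with PRP2 = 0. A small sketch of that bookkeeping, assuming a page-aligned contiguous buffer and ignoring the PRP-list indirection needed above two pages:

    PAGE = 4096

    def prp_pages(virt_addr, length):
        # One entry per 4 KiB page covered by the buffer.
        assert virt_addr % PAGE == 0
        n = -(-length // PAGE)  # ceil(length / PAGE)
        return [virt_addr + i * PAGE for i in range(n)]

    assert prp_pages(0x2000002F6000, 8192) == [0x2000002F6000, 0x2000002F7000]
    assert prp_pages(0x2000002FC000, 512) == [0x2000002FC000]
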
00:17:54.511 [2024-12-05 09:49:41.996092] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:54.511 [2024-12-05 09:49:41.996103] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.511 [2024-12-05 09:49:41.996113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:54.511 [2024-12-05 09:49:41.996128] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:54.511 [2024-12-05 09:49:41.996136] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:54.511 [2024-12-05 09:49:41.996147] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.511 [2024-12-05 09:49:41.996156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:54.511 [2024-12-05 09:49:41.996180] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:54.511 [2024-12-05 09:49:41.996188] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:54.511 [2024-12-05 09:49:41.996196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:54.511 [2024-12-05 09:49:41.996209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:54.511 [2024-12-05 09:49:42.003954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:54.511 [2024-12-05 09:49:42.004010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:54.511 [2024-12-05 09:49:42.004032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:54.511 [2024-12-05 09:49:42.004044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:54.511 ===================================================== 00:17:54.511 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:54.511 ===================================================== 00:17:54.511 Controller Capabilities/Features 00:17:54.511 ================================ 00:17:54.511 Vendor ID: 4e58 00:17:54.511 Subsystem Vendor ID: 4e58 00:17:54.511 Serial Number: SPDK2 00:17:54.511 Model Number: SPDK bdev Controller 00:17:54.511 Firmware Version: 25.01 00:17:54.511 Recommended Arb Burst: 6 00:17:54.511 IEEE OUI Identifier: 8d 6b 50 00:17:54.511 Multi-path I/O 00:17:54.511 May have multiple subsystem ports: Yes 00:17:54.511 May have multiple controllers: Yes 00:17:54.511 Associated with SR-IOV VF: No 00:17:54.511 Max Data Transfer Size: 131072 00:17:54.511 Max Number of Namespaces: 32 00:17:54.511 Max Number of I/O Queues: 127 00:17:54.511 NVMe Specification Version (VS): 1.3 00:17:54.511 NVMe Specification Version (Identify): 1.3 00:17:54.511 Maximum Queue Entries: 256 00:17:54.511 Contiguous Queues Required: Yes 00:17:54.511 Arbitration Mechanisms Supported 00:17:54.511 Weighted Round Robin: Not Supported 00:17:54.511 Vendor 
Specific: Not Supported 00:17:54.511 Reset Timeout: 15000 ms 00:17:54.511 Doorbell Stride: 4 bytes 00:17:54.511 NVM Subsystem Reset: Not Supported 00:17:54.511 Command Sets Supported 00:17:54.511 NVM Command Set: Supported 00:17:54.511 Boot Partition: Not Supported 00:17:54.511 Memory Page Size Minimum: 4096 bytes 00:17:54.511 Memory Page Size Maximum: 4096 bytes 00:17:54.511 Persistent Memory Region: Not Supported 00:17:54.511 Optional Asynchronous Events Supported 00:17:54.511 Namespace Attribute Notices: Supported 00:17:54.511 Firmware Activation Notices: Not Supported 00:17:54.511 ANA Change Notices: Not Supported 00:17:54.511 PLE Aggregate Log Change Notices: Not Supported 00:17:54.511 LBA Status Info Alert Notices: Not Supported 00:17:54.511 EGE Aggregate Log Change Notices: Not Supported 00:17:54.511 Normal NVM Subsystem Shutdown event: Not Supported 00:17:54.511 Zone Descriptor Change Notices: Not Supported 00:17:54.511 Discovery Log Change Notices: Not Supported 00:17:54.511 Controller Attributes 00:17:54.511 128-bit Host Identifier: Supported 00:17:54.511 Non-Operational Permissive Mode: Not Supported 00:17:54.511 NVM Sets: Not Supported 00:17:54.511 Read Recovery Levels: Not Supported 00:17:54.511 Endurance Groups: Not Supported 00:17:54.511 Predictable Latency Mode: Not Supported 00:17:54.511 Traffic Based Keep ALive: Not Supported 00:17:54.511 Namespace Granularity: Not Supported 00:17:54.511 SQ Associations: Not Supported 00:17:54.511 UUID List: Not Supported 00:17:54.511 Multi-Domain Subsystem: Not Supported 00:17:54.511 Fixed Capacity Management: Not Supported 00:17:54.511 Variable Capacity Management: Not Supported 00:17:54.511 Delete Endurance Group: Not Supported 00:17:54.511 Delete NVM Set: Not Supported 00:17:54.511 Extended LBA Formats Supported: Not Supported 00:17:54.511 Flexible Data Placement Supported: Not Supported 00:17:54.511 00:17:54.511 Controller Memory Buffer Support 00:17:54.511 ================================ 00:17:54.511 Supported: No 00:17:54.511 00:17:54.511 Persistent Memory Region Support 00:17:54.511 ================================ 00:17:54.511 Supported: No 00:17:54.511 00:17:54.511 Admin Command Set Attributes 00:17:54.511 ============================ 00:17:54.511 Security Send/Receive: Not Supported 00:17:54.511 Format NVM: Not Supported 00:17:54.511 Firmware Activate/Download: Not Supported 00:17:54.511 Namespace Management: Not Supported 00:17:54.511 Device Self-Test: Not Supported 00:17:54.511 Directives: Not Supported 00:17:54.511 NVMe-MI: Not Supported 00:17:54.511 Virtualization Management: Not Supported 00:17:54.511 Doorbell Buffer Config: Not Supported 00:17:54.511 Get LBA Status Capability: Not Supported 00:17:54.511 Command & Feature Lockdown Capability: Not Supported 00:17:54.511 Abort Command Limit: 4 00:17:54.511 Async Event Request Limit: 4 00:17:54.511 Number of Firmware Slots: N/A 00:17:54.511 Firmware Slot 1 Read-Only: N/A 00:17:54.511 Firmware Activation Without Reset: N/A 00:17:54.511 Multiple Update Detection Support: N/A 00:17:54.511 Firmware Update Granularity: No Information Provided 00:17:54.511 Per-Namespace SMART Log: No 00:17:54.511 Asymmetric Namespace Access Log Page: Not Supported 00:17:54.511 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:54.511 Command Effects Log Page: Supported 00:17:54.511 Get Log Page Extended Data: Supported 00:17:54.511 Telemetry Log Pages: Not Supported 00:17:54.511 Persistent Event Log Pages: Not Supported 00:17:54.511 Supported Log Pages Log Page: May Support 00:17:54.511 Commands 
Supported & Effects Log Page: Not Supported 00:17:54.511 Feature Identifiers & Effects Log Page:May Support 00:17:54.511 NVMe-MI Commands & Effects Log Page: May Support 00:17:54.511 Data Area 4 for Telemetry Log: Not Supported 00:17:54.511 Error Log Page Entries Supported: 128 00:17:54.511 Keep Alive: Supported 00:17:54.511 Keep Alive Granularity: 10000 ms 00:17:54.511 00:17:54.511 NVM Command Set Attributes 00:17:54.511 ========================== 00:17:54.511 Submission Queue Entry Size 00:17:54.511 Max: 64 00:17:54.511 Min: 64 00:17:54.511 Completion Queue Entry Size 00:17:54.511 Max: 16 00:17:54.511 Min: 16 00:17:54.511 Number of Namespaces: 32 00:17:54.511 Compare Command: Supported 00:17:54.511 Write Uncorrectable Command: Not Supported 00:17:54.511 Dataset Management Command: Supported 00:17:54.511 Write Zeroes Command: Supported 00:17:54.511 Set Features Save Field: Not Supported 00:17:54.511 Reservations: Not Supported 00:17:54.511 Timestamp: Not Supported 00:17:54.511 Copy: Supported 00:17:54.511 Volatile Write Cache: Present 00:17:54.511 Atomic Write Unit (Normal): 1 00:17:54.511 Atomic Write Unit (PFail): 1 00:17:54.511 Atomic Compare & Write Unit: 1 00:17:54.511 Fused Compare & Write: Supported 00:17:54.511 Scatter-Gather List 00:17:54.511 SGL Command Set: Supported (Dword aligned) 00:17:54.511 SGL Keyed: Not Supported 00:17:54.511 SGL Bit Bucket Descriptor: Not Supported 00:17:54.511 SGL Metadata Pointer: Not Supported 00:17:54.511 Oversized SGL: Not Supported 00:17:54.511 SGL Metadata Address: Not Supported 00:17:54.511 SGL Offset: Not Supported 00:17:54.511 Transport SGL Data Block: Not Supported 00:17:54.511 Replay Protected Memory Block: Not Supported 00:17:54.511 00:17:54.511 Firmware Slot Information 00:17:54.511 ========================= 00:17:54.511 Active slot: 1 00:17:54.511 Slot 1 Firmware Revision: 25.01 00:17:54.511 00:17:54.511 00:17:54.511 Commands Supported and Effects 00:17:54.511 ============================== 00:17:54.511 Admin Commands 00:17:54.511 -------------- 00:17:54.511 Get Log Page (02h): Supported 00:17:54.511 Identify (06h): Supported 00:17:54.511 Abort (08h): Supported 00:17:54.512 Set Features (09h): Supported 00:17:54.512 Get Features (0Ah): Supported 00:17:54.512 Asynchronous Event Request (0Ch): Supported 00:17:54.512 Keep Alive (18h): Supported 00:17:54.512 I/O Commands 00:17:54.512 ------------ 00:17:54.512 Flush (00h): Supported LBA-Change 00:17:54.512 Write (01h): Supported LBA-Change 00:17:54.512 Read (02h): Supported 00:17:54.512 Compare (05h): Supported 00:17:54.512 Write Zeroes (08h): Supported LBA-Change 00:17:54.512 Dataset Management (09h): Supported LBA-Change 00:17:54.512 Copy (19h): Supported LBA-Change 00:17:54.512 00:17:54.512 Error Log 00:17:54.512 ========= 00:17:54.512 00:17:54.512 Arbitration 00:17:54.512 =========== 00:17:54.512 Arbitration Burst: 1 00:17:54.512 00:17:54.512 Power Management 00:17:54.512 ================ 00:17:54.512 Number of Power States: 1 00:17:54.512 Current Power State: Power State #0 00:17:54.512 Power State #0: 00:17:54.512 Max Power: 0.00 W 00:17:54.512 Non-Operational State: Operational 00:17:54.512 Entry Latency: Not Reported 00:17:54.512 Exit Latency: Not Reported 00:17:54.512 Relative Read Throughput: 0 00:17:54.512 Relative Read Latency: 0 00:17:54.512 Relative Write Throughput: 0 00:17:54.512 Relative Write Latency: 0 00:17:54.512 Idle Power: Not Reported 00:17:54.512 Active Power: Not Reported 00:17:54.512 Non-Operational Permissive Mode: Not Supported 00:17:54.512 00:17:54.512 Health 
Information 00:17:54.512 ================== 00:17:54.512 Critical Warnings: 00:17:54.512 Available Spare Space: OK 00:17:54.512 Temperature: OK 00:17:54.512 Device Reliability: OK 00:17:54.512 Read Only: No 00:17:54.512 Volatile Memory Backup: OK 00:17:54.512 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:54.512 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:54.512 Available Spare: 0% 00:17:54.512 Available Spare Threshold: 0% [2024-12-05 09:49:42.004212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:54.512 [2024-12-05 09:49:42.011954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:54.512 [2024-12-05 09:49:42.012060] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:54.512 [2024-12-05 09:49:42.012082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.512 [2024-12-05 09:49:42.012096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.512 [2024-12-05 09:49:42.012106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.512 [2024-12-05 09:49:42.012118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:54.512 [2024-12-05 09:49:42.012241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:54.512 [2024-12-05 09:49:42.012272] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:54.512 [2024-12-05 09:49:42.013238] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:54.512 [2024-12-05 09:49:42.013352] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:54.512 [2024-12-05 09:49:42.013373] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:54.512 [2024-12-05 09:49:42.014259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:54.512 [2024-12-05 09:49:42.014319] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:54.512 [2024-12-05 09:49:42.014845] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:54.512 [2024-12-05 09:49:42.017972] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:54.512 Life Percentage Used: 0% 00:17:54.512 Data Units Read: 0 00:17:54.512 Data Units Written: 0 00:17:54.512 Host Read Commands: 0 00:17:54.512 Host Write Commands: 0 00:17:54.512 Controller Busy Time: 0 minutes 00:17:54.512 Power Cycles: 0 00:17:54.512 Power On Hours: 0 hours 00:17:54.512 Unsafe Shutdowns: 0 00:17:54.512 Unrecoverable Media Errors: 0 00:17:54.512 Lifetime Error Log Entries: 0 00:17:54.512 Warning

Temperature Time: 0 minutes 00:17:54.512 Critical Temperature Time: 0 minutes 00:17:54.512 00:17:54.512 Number of Queues 00:17:54.512 ================ 00:17:54.512 Number of I/O Submission Queues: 127 00:17:54.512 Number of I/O Completion Queues: 127 00:17:54.512 00:17:54.512 Active Namespaces 00:17:54.512 ================= 00:17:54.512 Namespace ID:1 00:17:54.512 Error Recovery Timeout: Unlimited 00:17:54.512 Command Set Identifier: NVM (00h) 00:17:54.512 Deallocate: Supported 00:17:54.512 Deallocated/Unwritten Error: Not Supported 00:17:54.512 Deallocated Read Value: Unknown 00:17:54.512 Deallocate in Write Zeroes: Not Supported 00:17:54.512 Deallocated Guard Field: 0xFFFF 00:17:54.512 Flush: Supported 00:17:54.512 Reservation: Supported 00:17:54.512 Namespace Sharing Capabilities: Multiple Controllers 00:17:54.512 Size (in LBAs): 131072 (0GiB) 00:17:54.512 Capacity (in LBAs): 131072 (0GiB) 00:17:54.512 Utilization (in LBAs): 131072 (0GiB) 00:17:54.512 NGUID: E264DCFC89BB4B37AA7EBECB434308A9 00:17:54.512 UUID: e264dcfc-89bb-4b37-aa7e-becb434308a9 00:17:54.512 Thin Provisioning: Not Supported 00:17:54.512 Per-NS Atomic Units: Yes 00:17:54.512 Atomic Boundary Size (Normal): 0 00:17:54.512 Atomic Boundary Size (PFail): 0 00:17:54.512 Atomic Boundary Offset: 0 00:17:54.512 Maximum Single Source Range Length: 65535 00:17:54.512 Maximum Copy Length: 65535 00:17:54.512 Maximum Source Range Count: 1 00:17:54.512 NGUID/EUI64 Never Reused: No 00:17:54.512 Namespace Write Protected: No 00:17:54.512 Number of LBA Formats: 1 00:17:54.512 Current LBA Format: LBA Format #00 00:17:54.512 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:54.512 00:17:54.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:55.080 [2024-12-05 09:49:42.485241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:00.368 Initializing NVMe Controllers 00:18:00.368 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:00.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:00.368 Initialization complete. Launching workers. 
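
The namespace sizes above are reported both in LBAs and in integer GiB, which is why a 131072-LBA namespace with 512-byte blocks (64 MiB) prints as "(0GiB)":

    lbas, lba_size = 131072, 512   # from "Size (in LBAs)" and "LBA Format #00: Data Size: 512"
    total = lbas * lba_size
    print(total // 2**20, "MiB ->", total // 2**30, "GiB")   # 64 MiB -> 0 GiB
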
00:18:00.368 ======================================================== 00:18:00.368 Latency(us) 00:18:00.368 Device Information : IOPS MiB/s Average min max 00:18:00.368 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30343.72 118.53 4217.41 1234.53 11346.81 00:18:00.368 ======================================================== 00:18:00.368 Total : 30343.72 118.53 4217.41 1234.53 11346.81 00:18:00.368 00:18:00.368 [2024-12-05 09:49:47.571191] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:00.368 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:00.627 [2024-12-05 09:49:48.038635] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:05.901 Initializing NVMe Controllers 00:18:05.901 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:05.901 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:05.901 Initialization complete. Launching workers. 00:18:05.901 ======================================================== 00:18:05.901 Latency(us) 00:18:05.901 Device Information : IOPS MiB/s Average min max 00:18:05.901 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29749.58 116.21 4300.75 1228.54 10501.14 00:18:05.901 ======================================================== 00:18:05.901 Total : 29749.58 116.21 4300.75 1228.54 10501.14 00:18:05.901 00:18:05.901 [2024-12-05 09:49:53.053772] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:05.901 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:05.901 [2024-12-05 09:49:53.477730] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:11.169 [2024-12-05 09:49:58.616508] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:11.169 Initializing NVMe Controllers 00:18:11.169 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:11.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:11.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:11.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:11.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:11.169 Initialization complete. Launching workers. 
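
A consistency check on the two perf tables above: MiB/s is IOPS times the 4096-byte I/O size from the -o 4096 flag, so 30343.72 x 4096 / 2^20 = 118.53 MiB/s for the read run and 29749.58 x 4096 / 2^20 = 116.21 MiB/s for the write run:

    IO_SIZE = 4096  # -o 4096 on the spdk_nvme_perf command lines above

    for iops, reported in [(30343.72, 118.53), (29749.58, 116.21)]:
        mibs = iops * IO_SIZE / 2**20
        assert abs(mibs - reported) < 0.01
        print(f"{iops:9.2f} IOPS -> {mibs:6.2f} MiB/s")
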
00:18:11.169 Starting thread on core 2 00:18:11.169 Starting thread on core 3 00:18:11.169 Starting thread on core 1 00:18:11.169 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:11.767 [2024-12-05 09:49:59.082081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:15.069 [2024-12-05 09:50:02.260750] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:15.069 Initializing NVMe Controllers 00:18:15.069 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.069 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.069 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:15.069 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:15.069 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:15.069 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:15.069 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:18:15.069 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:15.069 Initialization complete. Launching workers. 00:18:15.069 Starting thread on core 1 with urgent priority queue 00:18:15.069 Starting thread on core 2 with urgent priority queue 00:18:15.069 Starting thread on core 3 with urgent priority queue 00:18:15.069 Starting thread on core 0 with urgent priority queue 00:18:15.069 SPDK bdev Controller (SPDK2 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:18:15.069 SPDK bdev Controller (SPDK2 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:18:15.069 SPDK bdev Controller (SPDK2 ) core 2: 1002.67 IO/s 99.73 secs/100000 ios 00:18:15.069 SPDK bdev Controller (SPDK2 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:18:15.069 ======================================================== 00:18:15.069 00:18:15.069 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:15.328 [2024-12-05 09:50:02.738326] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:15.328 Initializing NVMe Controllers 00:18:15.328 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.328 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:15.328 Namespace ID: 1 size: 0GB 00:18:15.328 Initialization complete. 00:18:15.328 INFO: using host memory buffer for IO 00:18:15.328 Hello world! 
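
In the arbitration table above, secs/100000 ios is just 100000 divided by the IO/s column (100000 / 1045.33 = 95.66, 100000 / 490.67 = 203.80, and so on). A quick check:

    rates = {0: 1045.33, 1: 490.67, 2: 1002.67, 3: 469.33}  # IO/s per core, from the table

    for core, iops in rates.items():
        print(f"core {core}: {100000 / iops:6.2f} secs/100000 ios")
    # -> 95.66, 203.80, 99.73, 213.07
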
00:18:15.328 [2024-12-05 09:50:02.752119] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:15.328 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:15.587 [2024-12-05 09:50:03.191128] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:16.966 Initializing NVMe Controllers 00:18:16.966 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:16.967 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:16.967 Initialization complete. Launching workers. 00:18:16.967 submit (in ns) avg, min, max = 6619.3, 3808.2, 4034616.4 00:18:16.967 complete (in ns) avg, min, max = 32892.8, 2082.7, 6049476.4 00:18:16.967 00:18:16.967 Submit histogram 00:18:16.967 ================ 00:18:16.967 Range in us Cumulative Count 00:18:16.967 3.782 - 3.811: 0.0185% ( 2) 00:18:16.967 3.811 - 3.840: 0.1568% ( 15) 00:18:16.967 3.840 - 3.869: 1.2177% ( 115) 00:18:16.967 3.869 - 3.898: 3.9022% ( 291) 00:18:16.967 3.898 - 3.927: 7.3893% ( 378) 00:18:16.967 3.927 - 3.956: 10.9317% ( 384) 00:18:16.967 3.956 - 3.985: 15.9779% ( 547) 00:18:16.967 3.985 - 4.015: 23.6255% ( 829) 00:18:16.967 4.015 - 4.044: 32.0941% ( 918) 00:18:16.967 4.044 - 4.073: 41.6513% ( 1036) 00:18:16.967 4.073 - 4.102: 50.2860% ( 936) 00:18:16.967 4.102 - 4.131: 57.9797% ( 834) 00:18:16.967 4.131 - 4.160: 63.3579% ( 583) 00:18:16.967 4.160 - 4.189: 67.7675% ( 478) 00:18:16.967 4.189 - 4.218: 71.1716% ( 369) 00:18:16.967 4.218 - 4.247: 74.0221% ( 309) 00:18:16.967 4.247 - 4.276: 76.6697% ( 287) 00:18:16.967 4.276 - 4.305: 78.8653% ( 238) 00:18:16.967 4.305 - 4.335: 81.0424% ( 236) 00:18:16.967 4.335 - 4.364: 83.0166% ( 214) 00:18:16.967 4.364 - 4.393: 85.1384% ( 230) 00:18:16.967 4.393 - 4.422: 86.9926% ( 201) 00:18:16.967 4.422 - 4.451: 88.7915% ( 195) 00:18:16.967 4.451 - 4.480: 90.3321% ( 167) 00:18:16.967 4.480 - 4.509: 91.5406% ( 131) 00:18:16.967 4.509 - 4.538: 92.5092% ( 105) 00:18:16.967 4.538 - 4.567: 93.1642% ( 71) 00:18:16.967 4.567 - 4.596: 93.7915% ( 68) 00:18:16.967 4.596 - 4.625: 94.2251% ( 47) 00:18:16.967 4.625 - 4.655: 94.5480% ( 35) 00:18:16.967 4.655 - 4.684: 94.8524% ( 33) 00:18:16.967 4.684 - 4.713: 95.1568% ( 33) 00:18:16.967 4.713 - 4.742: 95.3321% ( 19) 00:18:16.967 4.742 - 4.771: 95.4889% ( 17) 00:18:16.967 4.771 - 4.800: 95.5812% ( 10) 00:18:16.967 4.800 - 4.829: 95.6642% ( 9) 00:18:16.967 4.829 - 4.858: 95.7380% ( 8) 00:18:16.967 4.858 - 4.887: 95.7934% ( 6) 00:18:16.967 4.887 - 4.916: 95.8210% ( 3) 00:18:16.967 4.916 - 4.945: 95.8764% ( 6) 00:18:16.967 4.945 - 4.975: 95.9225% ( 5) 00:18:16.967 4.975 - 5.004: 95.9594% ( 4) 00:18:16.967 5.004 - 5.033: 95.9779% ( 2) 00:18:16.967 5.033 - 5.062: 95.9963% ( 2) 00:18:16.967 5.062 - 5.091: 96.0055% ( 1) 00:18:16.967 5.324 - 5.353: 96.0148% ( 1) 00:18:16.967 5.498 - 5.527: 96.0240% ( 1) 00:18:16.967 5.644 - 5.673: 96.0332% ( 1) 00:18:16.967 5.673 - 5.702: 96.0424% ( 1) 00:18:16.967 5.760 - 5.789: 96.0517% ( 1) 00:18:16.967 5.818 - 5.847: 96.0609% ( 1) 00:18:16.967 5.935 - 5.964: 96.0701% ( 1) 00:18:16.967 5.993 - 6.022: 96.0978% ( 3) 00:18:16.967 6.022 - 6.051: 96.1162% ( 2) 00:18:16.967 6.051 - 6.080: 96.1347% ( 2) 00:18:16.967 6.080 - 6.109: 96.1808% ( 5) 00:18:16.967 6.109 - 6.138: 96.2085% ( 3) 00:18:16.967 6.138 - 6.167: 96.2731% ( 7) 
00:18:16.967 6.167 - 6.196: 96.3469% ( 8) 00:18:16.967 6.196 - 6.225: 96.4207% ( 8) 00:18:16.967 6.225 - 6.255: 96.4483% ( 3) 00:18:16.967 6.255 - 6.284: 96.5129% ( 7) 00:18:16.967 6.284 - 6.313: 96.5683% ( 6) 00:18:16.967 6.313 - 6.342: 96.6421% ( 8) 00:18:16.967 6.342 - 6.371: 96.6974% ( 6) 00:18:16.967 6.371 - 6.400: 96.7712% ( 8) 00:18:16.967 6.400 - 6.429: 96.8358% ( 7) 00:18:16.967 6.429 - 6.458: 96.8727% ( 4) 00:18:16.967 6.458 - 6.487: 96.9373% ( 7) 00:18:16.967 6.487 - 6.516: 96.9926% ( 6) 00:18:16.967 6.516 - 6.545: 97.0387% ( 5) 00:18:16.967 6.545 - 6.575: 97.1125% ( 8) 00:18:16.967 6.604 - 6.633: 97.1587% ( 5) 00:18:16.967 6.633 - 6.662: 97.2048% ( 5) 00:18:16.967 6.662 - 6.691: 97.2417% ( 4) 00:18:16.967 6.691 - 6.720: 97.2694% ( 3) 00:18:16.967 6.778 - 6.807: 97.2970% ( 3) 00:18:16.967 6.807 - 6.836: 97.3155% ( 2) 00:18:16.967 6.836 - 6.865: 97.3432% ( 3) 00:18:16.967 6.865 - 6.895: 97.3801% ( 4) 00:18:16.967 6.895 - 6.924: 97.4077% ( 3) 00:18:16.967 6.924 - 6.953: 97.4354% ( 3) 00:18:16.967 6.982 - 7.011: 97.4539% ( 2) 00:18:16.967 7.011 - 7.040: 97.4908% ( 4) 00:18:16.967 7.069 - 7.098: 97.5092% ( 2) 00:18:16.967 7.127 - 7.156: 97.5277% ( 2) 00:18:16.967 7.185 - 7.215: 97.5461% ( 2) 00:18:16.967 7.244 - 7.273: 97.5646% ( 2) 00:18:16.967 7.273 - 7.302: 97.5830% ( 2) 00:18:16.967 7.302 - 7.331: 97.5923% ( 1) 00:18:16.967 7.331 - 7.360: 97.6199% ( 3) 00:18:16.967 7.360 - 7.389: 97.6476% ( 3) 00:18:16.967 7.389 - 7.418: 97.6661% ( 2) 00:18:16.967 7.418 - 7.447: 97.6937% ( 3) 00:18:16.967 7.447 - 7.505: 97.7122% ( 2) 00:18:16.967 7.505 - 7.564: 97.7491% ( 4) 00:18:16.967 7.564 - 7.622: 97.7675% ( 2) 00:18:16.967 7.622 - 7.680: 97.7952% ( 3) 00:18:16.967 7.680 - 7.738: 97.8137% ( 2) 00:18:16.967 7.738 - 7.796: 97.8413% ( 3) 00:18:16.967 7.796 - 7.855: 97.8598% ( 2) 00:18:16.967 7.913 - 7.971: 97.8967% ( 4) 00:18:16.967 7.971 - 8.029: 97.9059% ( 1) 00:18:16.967 8.029 - 8.087: 97.9428% ( 4) 00:18:16.967 8.087 - 8.145: 97.9889% ( 5) 00:18:16.967 8.145 - 8.204: 98.0166% ( 3) 00:18:16.967 8.204 - 8.262: 98.0351% ( 2) 00:18:16.967 8.262 - 8.320: 98.0627% ( 3) 00:18:16.967 8.320 - 8.378: 98.0812% ( 2) 00:18:16.967 8.378 - 8.436: 98.0904% ( 1) 00:18:16.967 8.436 - 8.495: 98.1089% ( 2) 00:18:16.967 8.495 - 8.553: 98.1273% ( 2) 00:18:16.967 8.553 - 8.611: 98.1642% ( 4) 00:18:16.967 8.611 - 8.669: 98.1827% ( 2) 00:18:16.967 8.669 - 8.727: 98.2288% ( 5) 00:18:16.967 8.727 - 8.785: 98.2565% ( 3) 00:18:16.967 8.844 - 8.902: 98.3210% ( 7) 00:18:16.967 8.902 - 8.960: 98.3672% ( 5) 00:18:16.967 8.960 - 9.018: 98.3856% ( 2) 00:18:16.967 9.018 - 9.076: 98.4041% ( 2) 00:18:16.967 9.076 - 9.135: 98.4225% ( 2) 00:18:16.967 9.193 - 9.251: 98.4317% ( 1) 00:18:16.967 9.251 - 9.309: 98.4410% ( 1) 00:18:16.967 9.309 - 9.367: 98.4871% ( 5) 00:18:16.968 9.425 - 9.484: 98.5055% ( 2) 00:18:16.968 9.484 - 9.542: 98.5240% ( 2) 00:18:16.968 9.716 - 9.775: 98.5332% ( 1) 00:18:16.968 9.775 - 9.833: 98.5517% ( 2) 00:18:16.968 9.833 - 9.891: 98.5609% ( 1) 00:18:16.968 9.891 - 9.949: 98.5701% ( 1) 00:18:16.968 10.124 - 10.182: 98.5793% ( 1) 00:18:16.968 10.240 - 10.298: 98.5886% ( 1) 00:18:16.968 10.415 - 10.473: 98.5978% ( 1) 00:18:16.968 10.473 - 10.531: 98.6162% ( 2) 00:18:16.968 10.531 - 10.589: 98.6255% ( 1) 00:18:16.968 10.589 - 10.647: 98.6347% ( 1) 00:18:16.968 10.647 - 10.705: 98.6439% ( 1) 00:18:16.968 10.705 - 10.764: 98.6624% ( 2) 00:18:16.968 10.822 - 10.880: 98.6716% ( 1) 00:18:16.968 10.938 - 10.996: 98.6900% ( 2) 00:18:16.968 11.113 - 11.171: 98.6993% ( 1) 00:18:16.968 11.171 - 11.229: 98.7085% ( 1) 
00:18:16.968 11.229 - 11.287: 98.7177% ( 1) 00:18:16.968 11.404 - 11.462: 98.7362% ( 2) 00:18:16.968 11.520 - 11.578: 98.7454% ( 1) 00:18:16.968 11.578 - 11.636: 98.7546% ( 1) 00:18:16.968 11.869 - 11.927: 98.7638% ( 1) 00:18:16.968 12.044 - 12.102: 98.7731% ( 1) 00:18:16.968 12.102 - 12.160: 98.8007% ( 3) 00:18:16.968 12.160 - 12.218: 98.8100% ( 1) 00:18:16.968 12.218 - 12.276: 98.8192% ( 1) 00:18:16.968 12.393 - 12.451: 98.8376% ( 2) 00:18:16.968 12.451 - 12.509: 98.8469% ( 1) 00:18:16.968 12.509 - 12.567: 98.8561% ( 1) 00:18:16.968 12.625 - 12.684: 98.8653% ( 1) 00:18:16.968 12.742 - 12.800: 98.8745% ( 1) 00:18:16.968 12.800 - 12.858: 98.8930% ( 2) 00:18:16.968 12.858 - 12.916: 98.9022% ( 1) 00:18:16.968 13.033 - 13.091: 98.9114% ( 1) 00:18:16.968 13.091 - 13.149: 98.9207% ( 1) 00:18:16.968 13.207 - 13.265: 98.9299% ( 1) 00:18:16.968 13.265 - 13.324: 98.9391% ( 1) 00:18:16.968 13.324 - 13.382: 98.9483% ( 1) 00:18:16.968 13.498 - 13.556: 98.9576% ( 1) 00:18:16.968 13.556 - 13.615: 98.9668% ( 1) 00:18:16.968 13.615 - 13.673: 98.9760% ( 1) 00:18:16.968 13.673 - 13.731: 98.9945% ( 2) 00:18:16.968 13.731 - 13.789: 99.0037% ( 1) 00:18:16.968 13.789 - 13.847: 99.0221% ( 2) 00:18:16.968 13.847 - 13.905: 99.0314% ( 1) 00:18:16.968 13.964 - 14.022: 99.0498% ( 2) 00:18:16.968 14.022 - 14.080: 99.0775% ( 3) 00:18:16.968 14.080 - 14.138: 99.0867% ( 1) 00:18:16.968 14.138 - 14.196: 99.0959% ( 1) 00:18:16.968 14.255 - 14.313: 99.1052% ( 1) 00:18:16.968 14.429 - 14.487: 99.1144% ( 1) 00:18:16.968 14.487 - 14.545: 99.1328% ( 2) 00:18:16.968 14.545 - 14.604: 99.1513% ( 2) 00:18:16.968 14.604 - 14.662: 99.1605% ( 1) 00:18:16.968 14.662 - 14.720: 99.1790% ( 2) 00:18:16.968 14.720 - 14.778: 99.1974% ( 2) 00:18:16.968 14.895 - 15.011: 99.2251% ( 3) 00:18:16.968 15.011 - 15.127: 99.2528% ( 3) 00:18:16.968 15.127 - 15.244: 99.2620% ( 1) 00:18:16.968 15.244 - 15.360: 99.2989% ( 4) 00:18:16.968 15.360 - 15.476: 99.3081% ( 1) 00:18:16.968 15.476 - 15.593: 99.3450% ( 4) 00:18:16.968 15.593 - 15.709: 99.3727% ( 3) 00:18:16.968 15.709 - 15.825: 99.4004% ( 3) 00:18:16.968 15.825 - 15.942: 99.4280% ( 3) 00:18:16.968 15.942 - 16.058: 99.4465% ( 2) 00:18:16.968 16.058 - 16.175: 99.4557% ( 1) 00:18:16.968 16.291 - 16.407: 99.4742% ( 2) 00:18:16.968 16.640 - 16.756: 99.4834% ( 1) 00:18:16.968 16.756 - 16.873: 99.4926% ( 1) 00:18:16.968 16.873 - 16.989: 99.5295% ( 4) 00:18:16.968 16.989 - 17.105: 99.5387% ( 1) 00:18:16.968 17.222 - 17.338: 99.5480% ( 1) 00:18:16.968 17.455 - 17.571: 99.5572% ( 1) 00:18:16.968 18.036 - 18.153: 99.5664% ( 1) 00:18:16.968 18.967 - 19.084: 99.5756% ( 1) 00:18:16.968 19.084 - 19.200: 99.5849% ( 1) 00:18:16.968 19.665 - 19.782: 99.5941% ( 1) 00:18:16.968 19.782 - 19.898: 99.6310% ( 4) 00:18:16.968 19.898 - 20.015: 99.6402% ( 1) 00:18:16.968 20.015 - 20.131: 99.6587% ( 2) 00:18:16.968 20.131 - 20.247: 99.6863% ( 3) 00:18:16.968 20.247 - 20.364: 99.7140% ( 3) 00:18:16.968 20.364 - 20.480: 99.7232% ( 1) 00:18:16.968 20.596 - 20.713: 99.7417% ( 2) 00:18:16.968 20.713 - 20.829: 99.7601% ( 2) 00:18:16.968 20.945 - 21.062: 99.7694% ( 1) 00:18:16.968 22.109 - 22.225: 99.7786% ( 1) 00:18:16.968 22.225 - 22.342: 99.7878% ( 1) 00:18:16.968 23.156 - 23.273: 99.7970% ( 1) 00:18:16.968 23.389 - 23.505: 99.8063% ( 1) 00:18:16.968 24.087 - 24.204: 99.8155% ( 1) 00:18:16.968 27.345 - 27.462: 99.8247% ( 1) 00:18:16.968 27.578 - 27.695: 99.8339% ( 1) 00:18:16.968 27.695 - 27.811: 99.8432% ( 1) 00:18:16.968 27.811 - 27.927: 99.8524% ( 1) 00:18:16.968 28.044 - 28.160: 99.8616% ( 1) 00:18:16.968 28.625 - 28.742: 
99.8708% ( 1) 00:18:16.968 28.975 - 29.091: 99.8801% ( 1) 00:18:16.968 29.091 - 29.207: 99.8893% ( 1) 00:18:16.968 30.953 - 31.185: 99.8985% ( 1) 00:18:16.968 33.047 - 33.280: 99.9077% ( 1) 00:18:16.968 39.564 - 39.796: 99.9170% ( 1) 00:18:16.968 46.080 - 46.313: 99.9262% ( 1) 00:18:16.968 53.062 - 53.295: 99.9354% ( 1) 00:18:16.968 58.182 - 58.415: 99.9446% ( 1) 00:18:16.968 3991.738 - 4021.527: 99.9908% ( 5) 00:18:16.968 4021.527 - 4051.316: 100.0000% ( 1) 00:18:16.968 00:18:16.968 Complete histogram 00:18:16.968 ================== 00:18:16.968 Range in us Cumulative Count 00:18:16.968 2.080 - 2.095: 0.2952% ( 32) 00:18:16.968 2.095 - 2.109: 5.3782% ( 551) 00:18:16.968 2.109 - 2.124: 24.7232% ( 2097) 00:18:16.968 2.124 - 2.138: 51.2731% ( 2878) 00:18:16.968 2.138 - 2.153: 70.2306% ( 2055) 00:18:16.968 2.153 - 2.167: 77.8413% ( 825) 00:18:16.968 2.167 - 2.182: 80.8026% ( 321) 00:18:16.968 2.182 - 2.196: 82.2509% ( 157) 00:18:16.968 2.196 - 2.211: 83.3487% ( 119) 00:18:16.968 2.211 - 2.225: 84.7232% ( 149) 00:18:16.968 2.225 - 2.240: 86.5590% ( 199) 00:18:16.968 2.240 - 2.255: 88.7546% ( 238) 00:18:16.968 2.255 - 2.269: 90.5812% ( 198) 00:18:16.968 2.269 - 2.284: 91.9188% ( 145) 00:18:16.968 2.284 - 2.298: 92.7952% ( 95) 00:18:16.968 2.298 - 2.313: 93.7085% ( 99) 00:18:16.968 2.313 - 2.327: 94.4004% ( 75) 00:18:16.968 2.327 - 2.342: 94.8616% ( 50) 00:18:16.968 2.342 - 2.356: 95.4613% ( 65) 00:18:16.968 2.356 - 2.371: 95.8303% ( 40) 00:18:16.968 2.371 - 2.385: 96.1439% ( 34) 00:18:16.968 2.385 - 2.400: 96.3838% ( 26) 00:18:16.968 2.400 - 2.415: 96.6513% ( 29) 00:18:16.968 2.415 - 2.429: 96.7897% ( 15) 00:18:16.968 2.429 - 2.444: 96.9649% ( 19) 00:18:16.968 2.444 - 2.458: 97.0756% ( 12) 00:18:16.968 2.458 - 2.473: 97.2048% ( 14) 00:18:16.968 2.473 - 2.487: 97.3155% ( 12) 00:18:16.968 2.487 - 2.502: 97.4077% ( 10) 00:18:16.968 2.502 - 2.516: 97.4631% ( 6) 00:18:16.969 2.516 - 2.531: 97.5369% ( 8) 00:18:16.969 2.531 - 2.545: 97.6015% ( 7) 00:18:16.969 2.545 - 2.560: 97.6384% ( 4) 00:18:16.969 2.560 - 2.575: 97.6568% ( 2) 00:18:16.969 2.575 - 2.589: 97.6661% ( 1) 00:18:16.969 2.589 - 2.604: 97.6937% ( 3) 00:18:16.969 2.604 - 2.618: 97.7030% ( 1) 00:18:16.969 2.618 - 2.633: 97.7122% ( 1) 00:18:16.969 2.633 - 2.647: 97.7214% ( 1) 00:18:16.969 2.647 - 2.662: 97.7399% ( 2) 00:18:16.969 2.691 - 2.705: 97.7491% ( 1) 00:18:16.969 2.720 - 2.735: 97.7583% ( 1) 00:18:16.969 2.735 - 2.749: 97.7768% ( 2) 00:18:16.969 2.749 - 2.764: 97.8044% ( 3) 00:18:16.969 2.764 - 2.778: 97.8137% ( 1) 00:18:16.969 2.807 - 2.822: 97.8229% ( 1) 00:18:16.969 2.836 - 2.851: 97.8321% ( 1) 00:18:16.969 2.880 - 2.895: 97.8413% ( 1) 00:18:16.969 2.909 - 2.924: 97.8598% ( 2) 00:18:16.969 2.967 - 2.982: 97.8690% ( 1) 00:18:16.969 3.055 - 3.069: 97.8782% ( 1) 00:18:16.969 3.113 - 3.127: 97.8875% ( 1) 00:18:16.969 3.142 - 3.156: 97.8967% ( 1) 00:18:16.969 4.189 - 4.218: 97.9059% ( 1) 00:18:16.969 4.247 - 4.276: 97.9151% ( 1) 00:18:16.969 4.305 - 4.335: 97.9336% ( 2) 00:18:16.969 4.335 - 4.364: 97.9428% ( 1) 00:18:16.969 4.393 - 4.422: 97.9613% ( 2) 00:18:16.969 4.422 - 4.451: 97.9705% ( 1) 00:18:16.969 4.538 - 4.567: 97.9982% ( 3) 00:18:16.969 4.596 - 4.625: 98.0074% ( 1) 00:18:16.969 4.655 - 4.684: 98.0166% ( 1) 00:18:16.969 4.684 - 4.713: 98.0258% ( 1) 00:18:16.969 4.742 - 4.771: 98.0443% ( 2) 00:18:16.969 4.771 - 4.800: 98.0535% ( 1) 00:18:16.969 4.858 - 4.887: 98.0720% ( 2) 00:18:16.969 5.527 - 5.556: 98.0812% ( 1) 00:18:16.969 5.789 - 5.818: 98.0904% ( 1) 00:18:16.969 6.051 - 6.080: 98.0996% ( 1) 00:18:16.969 6.458 - 6.487: 
98.1089% ( 1) 00:18:16.969 6.575 - 6.604: 98.1181% ( 1) 00:18:16.969 6.633 - 6.662: 98.1273% ( 1) 00:18:16.969 6.662 - 6.691: 98.1365% ( 1) 00:18:16.969 6.836 - 6.865: 98.1550% ( 2) 00:18:16.969 6.865 - 6.895: 98.1642% ( 1) 00:18:16.969 6.982 - 7.011: 98.1734% ( 1) 00:18:16.969 8.436 - 8.495: 98.1827% ( 1) 00:18:16.969 8.960 - 9.018: 98.1919% ( 1) 00:18:16.969 9.018 - 9.076: 98.2011% ( 1) 00:18:16.969 9.367 - 9.425: 98.2103% ( 1) 00:18:16.969 9.484 - 9.542: 98.2196% ( 1) 00:18:16.969 9.542 - 9.600: 98.2380% ( 2) 00:18:16.969 9.833 - 9.891: 98.2472% ( 1) 00:18:16.969 9.949 - 10.007: 98.2565% ( 1) 00:18:16.969 10.007 - 10.065: 98.2657% ( 1) 00:18:16.969 10.124 - 10.182: 98.3026% ( 4) 00:18:16.969 10.182 - 10.240: 98.3210% ( 2) 00:18:16.969 10.356 - 10.415: 98.3303% ( 1) 00:18:16.969 10.415 - 10.473: 98.3487% ( 2) 00:18:16.969 10.473 - 10.531: 98.3672% ( 2) 00:18:16.969 10.589 - 10.647: 98.3764% ( 1) 00:18:16.969 10.647 - 10.705: 98.4041% ( 3) 00:18:16.969 10.705 - 10.764: 98.4225% ( 2) 00:18:16.969 10.764 - 10.822: 98.4317% ( 1) 00:18:16.969 10.822 - 10.880: 98.4502% ( 2) 00:18:16.969 10.880 - 10.938: 98.4779% ( 3) 00:18:16.969 11.113 - 11.171: 98.4871% ( 1) 00:18:16.969 11.695 - 11.753: 98.4963% ( 1) 00:18:16.969 11.869 - 11.927: 98.5055% ( 1) 00:18:16.969 12.044 - 12.102: 98.5148% ( 1) 00:18:16.969 12.393 - 12.451: 98.5240% ( 1) 00:18:16.969 12.858 - 12.916: 98.5332% ( 1) 00:18:16.969 13.207 - 13.265: 98.5424% ( 1) 00:18:16.969 13.382 - 13.440: 98.5517% ( 1) 00:18:16.969 13.556 - 13.615: 98.5609% ( 1) 00:18:16.969 13.673 - 13.731: 98.5701% ( 1) [2024-12-05 09:50:04.301082] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:16.969 13.789 - 13.847: 98.5793% ( 1) 00:18:16.969 13.847 - 13.905: 98.5886% ( 1) 00:18:16.969 13.905 - 13.964: 98.6070% ( 2) 00:18:16.969 14.022 - 14.080: 98.6255% ( 2) 00:18:16.969 14.080 - 14.138: 98.6439% ( 2) 00:18:16.969 14.138 - 14.196: 98.6531% ( 1) 00:18:16.969 14.371 - 14.429: 98.6624% ( 1) 00:18:16.969 14.429 - 14.487: 98.6716% ( 1) 00:18:16.969 14.720 - 14.778: 98.6808% ( 1) 00:18:16.969 14.895 - 15.011: 98.6993% ( 2) 00:18:16.969 15.127 - 15.244: 98.7085% ( 1) 00:18:16.969 15.360 - 15.476: 98.7177% ( 1) 00:18:16.969 15.476 - 15.593: 98.7362% ( 2) 00:18:16.969 15.593 - 15.709: 98.7454% ( 1) 00:18:16.969 15.709 - 15.825: 98.7546% ( 1) 00:18:16.969 15.825 - 15.942: 98.7638% ( 1) 00:18:16.969 16.291 - 16.407: 98.7731% ( 1) 00:18:16.969 16.524 - 16.640: 98.7915% ( 2) 00:18:16.969 16.756 - 16.873: 98.8007% ( 1) 00:18:16.969 17.571 - 17.687: 98.8100% ( 1) 00:18:16.969 17.687 - 17.804: 98.8192% ( 1) 00:18:16.969 17.804 - 17.920: 98.8561% ( 4) 00:18:16.969 17.920 - 18.036: 98.8930% ( 4) 00:18:16.969 18.036 - 18.153: 98.9114% ( 2) 00:18:16.969 18.153 - 18.269: 99.0037% ( 10) 00:18:16.969 18.269 - 18.385: 99.0683% ( 7) 00:18:16.969 18.385 - 18.502: 99.0959% ( 3) 00:18:16.969 18.502 - 18.618: 99.1236% ( 3) 00:18:16.969 20.713 - 20.829: 99.1421% ( 2) 00:18:16.969 21.178 - 21.295: 99.1513% ( 1) 00:18:16.969 21.760 - 21.876: 99.1605% ( 1) 00:18:16.969 22.225 - 22.342: 99.1697% ( 1) 00:18:16.969 23.505 - 23.622: 99.1790% ( 1) 00:18:16.969 27.578 - 27.695: 99.1882% ( 1) 00:18:16.969 29.789 - 30.022: 99.1974% ( 1) 00:18:16.969 30.022 - 30.255: 99.2066% ( 1) 00:18:16.969 31.185 - 31.418: 99.2159% ( 1) 00:18:16.969 60.509 - 60.975: 99.2251% ( 1) 00:18:16.969 156.393 - 157.324: 99.2343% ( 1) 00:18:16.969 1228.800 - 1236.247: 99.2435% ( 1) 00:18:16.969 2025.658 - 2040.553: 99.2528% ( 1) 00:18:16.969 3038.487 -
3053.382: 99.2620% ( 1) 00:18:16.969 3902.371 - 3932.160: 99.2712% ( 1) 00:18:16.969 3961.949 - 3991.738: 99.3173% ( 5) 00:18:16.969 3991.738 - 4021.527: 99.8432% ( 57) 00:18:16.969 4021.527 - 4051.316: 99.9815% ( 15) 00:18:16.969 5987.607 - 6017.396: 99.9908% ( 1) 00:18:16.969 6047.185 - 6076.975: 100.0000% ( 1) 00:18:16.969 00:18:16.969 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:16.969 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:16.970 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:16.970 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:16.970 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:17.228 [ 00:18:17.228 { 00:18:17.228 "allow_any_host": true, 00:18:17.228 "hosts": [], 00:18:17.228 "listen_addresses": [], 00:18:17.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:17.228 "subtype": "Discovery" 00:18:17.228 }, 00:18:17.228 { 00:18:17.228 "allow_any_host": true, 00:18:17.228 "hosts": [], 00:18:17.228 "listen_addresses": [ 00:18:17.228 { 00:18:17.228 "adrfam": "IPv4", 00:18:17.228 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:17.228 "trsvcid": "0", 00:18:17.228 "trtype": "VFIOUSER" 00:18:17.228 } 00:18:17.228 ], 00:18:17.228 "max_cntlid": 65519, 00:18:17.228 "max_namespaces": 32, 00:18:17.228 "min_cntlid": 1, 00:18:17.228 "model_number": "SPDK bdev Controller", 00:18:17.228 "namespaces": [ 00:18:17.228 { 00:18:17.228 "bdev_name": "Malloc1", 00:18:17.228 "name": "Malloc1", 00:18:17.228 "nguid": "3CC96ACCFB0C4D6786159BB96373816A", 00:18:17.228 "nsid": 1, 00:18:17.228 "uuid": "3cc96acc-fb0c-4d67-8615-9bb96373816a" 00:18:17.228 }, 00:18:17.228 { 00:18:17.228 "bdev_name": "Malloc3", 00:18:17.228 "name": "Malloc3", 00:18:17.228 "nguid": "1AFE7D72651043BFB5668D1959C6F591", 00:18:17.228 "nsid": 2, 00:18:17.228 "uuid": "1afe7d72-6510-43bf-b566-8d1959c6f591" 00:18:17.228 } 00:18:17.228 ], 00:18:17.228 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:17.228 "serial_number": "SPDK1", 00:18:17.228 "subtype": "NVMe" 00:18:17.228 }, 00:18:17.228 { 00:18:17.228 "allow_any_host": true, 00:18:17.228 "hosts": [], 00:18:17.228 "listen_addresses": [ 00:18:17.228 { 00:18:17.229 "adrfam": "IPv4", 00:18:17.229 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:17.229 "trsvcid": "0", 00:18:17.229 "trtype": "VFIOUSER" 00:18:17.229 } 00:18:17.229 ], 00:18:17.229 "max_cntlid": 65519, 00:18:17.229 "max_namespaces": 32, 00:18:17.229 "min_cntlid": 1, 00:18:17.229 "model_number": "SPDK bdev Controller", 00:18:17.229 "namespaces": [ 00:18:17.229 { 00:18:17.229 "bdev_name": "Malloc2", 00:18:17.229 "name": "Malloc2", 00:18:17.229 "nguid": "E264DCFC89BB4B37AA7EBECB434308A9", 00:18:17.229 "nsid": 1, 00:18:17.229 "uuid": "e264dcfc-89bb-4b37-aa7e-becb434308a9" 00:18:17.229 } 00:18:17.229 ], 00:18:17.229 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:17.229 "serial_number": "SPDK2", 00:18:17.229 "subtype": "NVMe" 00:18:17.229 } 00:18:17.229 ] 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:17.229 09:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=82360 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:17.229 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:17.487 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:17.487 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:18:17.487 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=3 00:18:17.487 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:17.487 [2024-12-05 09:50:05.010584] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.487 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:17.487 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:17.487 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:17.487 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:17.487 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:18.053 Malloc4 00:18:18.053 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:18.311 [2024-12-05 09:50:05.684817] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.311 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:18.311 Asynchronous Event Request test 00:18:18.311 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.311 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.311 Registering asynchronous event callbacks... 00:18:18.311 Starting namespace attribute notice tests for all controllers... 00:18:18.311 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:18.311 aer_cb - Changed Namespace 00:18:18.311 Cleaning up... 00:18:18.569 [ 00:18:18.569 { 00:18:18.569 "allow_any_host": true, 00:18:18.569 "hosts": [], 00:18:18.569 "listen_addresses": [], 00:18:18.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:18.569 "subtype": "Discovery" 00:18:18.569 }, 00:18:18.569 { 00:18:18.569 "allow_any_host": true, 00:18:18.569 "hosts": [], 00:18:18.569 "listen_addresses": [ 00:18:18.569 { 00:18:18.569 "adrfam": "IPv4", 00:18:18.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:18.569 "trsvcid": "0", 00:18:18.569 "trtype": "VFIOUSER" 00:18:18.569 } 00:18:18.569 ], 00:18:18.569 "max_cntlid": 65519, 00:18:18.569 "max_namespaces": 32, 00:18:18.569 "min_cntlid": 1, 00:18:18.569 "model_number": "SPDK bdev Controller", 00:18:18.569 "namespaces": [ 00:18:18.569 { 00:18:18.569 "bdev_name": "Malloc1", 00:18:18.569 "name": "Malloc1", 00:18:18.569 "nguid": "3CC96ACCFB0C4D6786159BB96373816A", 00:18:18.569 "nsid": 1, 00:18:18.569 "uuid": "3cc96acc-fb0c-4d67-8615-9bb96373816a" 00:18:18.569 }, 00:18:18.569 { 00:18:18.569 "bdev_name": "Malloc3", 00:18:18.569 "name": "Malloc3", 00:18:18.569 "nguid": "1AFE7D72651043BFB5668D1959C6F591", 00:18:18.569 "nsid": 2, 00:18:18.569 "uuid": "1afe7d72-6510-43bf-b566-8d1959c6f591" 00:18:18.569 } 00:18:18.569 ], 00:18:18.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:18.569 "serial_number": "SPDK1", 00:18:18.569 "subtype": "NVMe" 00:18:18.569 }, 00:18:18.569 { 00:18:18.569 "allow_any_host": true, 00:18:18.569 "hosts": [], 00:18:18.569 "listen_addresses": [ 00:18:18.569 { 00:18:18.569 "adrfam": "IPv4", 00:18:18.569 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:18.569 "trsvcid": "0", 00:18:18.569 "trtype": "VFIOUSER" 00:18:18.569 } 00:18:18.569 ], 00:18:18.569 "max_cntlid": 65519, 00:18:18.569 "max_namespaces": 32, 00:18:18.569 "min_cntlid": 1, 00:18:18.569 "model_number": "SPDK bdev Controller", 00:18:18.569 "namespaces": [ 00:18:18.569 { 00:18:18.569 "bdev_name": "Malloc2", 00:18:18.569 "name": "Malloc2", 00:18:18.569 "nguid": "E264DCFC89BB4B37AA7EBECB434308A9", 00:18:18.569 "nsid": 1, 00:18:18.569 "uuid": 
"e264dcfc-89bb-4b37-aa7e-becb434308a9" 00:18:18.569 }, 00:18:18.569 { 00:18:18.569 "bdev_name": "Malloc4", 00:18:18.569 "name": "Malloc4", 00:18:18.569 "nguid": "9C98348588B84ABA9E802FFE6A872EF3", 00:18:18.569 "nsid": 2, 00:18:18.569 "uuid": "9c983485-88b8-4aba-9e80-2ffe6a872ef3" 00:18:18.569 } 00:18:18.569 ], 00:18:18.569 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:18.569 "serial_number": "SPDK2", 00:18:18.569 "subtype": "NVMe" 00:18:18.569 } 00:18:18.569 ] 00:18:18.569 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 82360 00:18:18.569 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:18.569 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 81646 00:18:18.569 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 81646 ']' 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 81646 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81646 00:18:18.570 killing process with pid 81646 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81646' 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 81646 00:18:18.570 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 81646 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=82423 00:18:19.955 Process pid: 82423 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 82423' 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 82423 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 82423 ']' 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.955 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.956 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.956 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:20.214 [2024-12-05 09:50:07.682742] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:20.214 [2024-12-05 09:50:07.685894] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:20.214 [2024-12-05 09:50:07.686053] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.473 [2024-12-05 09:50:07.869559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.473 [2024-12-05 09:50:07.980626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.473 [2024-12-05 09:50:07.980708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.473 [2024-12-05 09:50:07.980723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.473 [2024-12-05 09:50:07.980736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.473 [2024-12-05 09:50:07.980747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.473 [2024-12-05 09:50:07.982780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.473 [2024-12-05 09:50:07.983002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.473 [2024-12-05 09:50:07.983081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.473 [2024-12-05 09:50:07.983425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.732 [2024-12-05 09:50:08.271475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:20.732 [2024-12-05 09:50:08.272761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:20.732 [2024-12-05 09:50:08.274152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:20.732 [2024-12-05 09:50:08.274552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:20.732 [2024-12-05 09:50:08.274901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
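Note: condensed into a hedged, runnable sketch, the interrupt-mode bring-up this trace performs amounts to roughly the following (paths as in this run; the real harness wraps each step in helpers):

# Sketch only, not the harness itself; error handling and helper plumbing omitted.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!
# ...poll /var/tmp/spdk.sock until the target answers RPCs (the waitforlisten trace above)...
"$spdk/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I   # transport flags copied verbatim from the trace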
00:18:21.299 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.299 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:21.299 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:22.235 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:22.800 Malloc1 00:18:22.800 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:23.059 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:23.317 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:23.575 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:23.575 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:23.575 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:23.834 Malloc2 00:18:23.834 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:24.402 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:24.402 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 82423 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 82423 ']' 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 82423 00:18:24.661 09:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82423 00:18:24.661 killing process with pid 82423 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82423' 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 82423 00:18:24.661 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 82423 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:26.038 00:18:26.038 real 1m0.634s 00:18:26.038 user 3m47.918s 00:18:26.038 sys 0m4.807s 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:26.038 ************************************ 00:18:26.038 END TEST nvmf_vfio_user 00:18:26.038 ************************************ 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.038 ************************************ 00:18:26.038 START TEST nvmf_vfio_user_nvme_compliance 00:18:26.038 ************************************ 00:18:26.038 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:26.298 * Looking for test storage... 
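Note: the killprocess/wait sequence traced above follows a small reusable pattern; a hedged reconstruction is below (the real autotest_common.sh helper has more branches, e.g. the reactor_0/sudo check visible in the trace):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1      # nothing to do if the pid is already gone
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                     # reap the target so its exit status is collected
}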
00:18:26.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.298 --rc genhtml_branch_coverage=1 00:18:26.298 --rc genhtml_function_coverage=1 00:18:26.298 --rc genhtml_legend=1 00:18:26.298 --rc geninfo_all_blocks=1 00:18:26.298 --rc geninfo_unexecuted_blocks=1 00:18:26.298 00:18:26.298 ' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.298 --rc genhtml_branch_coverage=1 00:18:26.298 --rc genhtml_function_coverage=1 00:18:26.298 --rc genhtml_legend=1 00:18:26.298 --rc geninfo_all_blocks=1 00:18:26.298 --rc geninfo_unexecuted_blocks=1 00:18:26.298 00:18:26.298 ' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.298 --rc genhtml_branch_coverage=1 00:18:26.298 --rc genhtml_function_coverage=1 00:18:26.298 --rc genhtml_legend=1 00:18:26.298 --rc geninfo_all_blocks=1 00:18:26.298 --rc geninfo_unexecuted_blocks=1 00:18:26.298 00:18:26.298 ' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.298 --rc genhtml_branch_coverage=1 00:18:26.298 --rc genhtml_function_coverage=1 00:18:26.298 --rc genhtml_legend=1 00:18:26.298 --rc geninfo_all_blocks=1 00:18:26.298 --rc 
geninfo_unexecuted_blocks=1 00:18:26.298 00:18:26.298 ' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.298 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.299 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=82638 00:18:26.299 Process pid: 82638 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 82638' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 82638 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 82638 ']' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.299 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:26.557 [2024-12-05 09:50:13.938009] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
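Note: the '[: : integer expression expected' message logged above is bash rejecting an empty string handed to -eq at common.sh line 33. A minimal hedged sketch of the usual guard (the flag name here is hypothetical, for illustration only):

flag=${SPDK_TEST_SOMETHING:-0}   # hypothetical variable; ':-0' supplies an integer default
if [ "$flag" -eq 1 ]; then       # now always a valid integer comparison
  echo "feature enabled"
fi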
00:18:26.557 [2024-12-05 09:50:13.938198] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.558 [2024-12-05 09:50:14.120452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:26.816 [2024-12-05 09:50:14.251538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.816 [2024-12-05 09:50:14.251614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.816 [2024-12-05 09:50:14.251642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.816 [2024-12-05 09:50:14.251659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.816 [2024-12-05 09:50:14.251682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.817 [2024-12-05 09:50:14.254725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.817 [2024-12-05 09:50:14.254927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.817 [2024-12-05 09:50:14.254967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.384 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.384 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:27.384 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:28.326 malloc0 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:28.326 09:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.326 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:28.583 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.583 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:28.583 00:18:28.583 00:18:28.583 CUnit - A unit testing framework for C - Version 2.1-3 00:18:28.583 http://cunit.sourceforge.net/ 00:18:28.583 00:18:28.583 00:18:28.583 Suite: nvme_compliance 00:18:28.841 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 09:50:16.255687] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.841 [2024-12-05 09:50:16.257581] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:28.841 [2024-12-05 09:50:16.257631] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:28.841 [2024-12-05 09:50:16.257648] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:28.841 [2024-12-05 09:50:16.261738] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.841 passed 00:18:28.841 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 09:50:16.367710] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.841 [2024-12-05 09:50:16.373773] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.841 passed 00:18:29.099 Test: admin_identify_ns ...[2024-12-05 09:50:16.482080] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.099 [2024-12-05 09:50:16.542978] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:29.099 [2024-12-05 09:50:16.550963] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:29.099 [2024-12-05 09:50:16.572120] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:18:29.099 passed 00:18:29.099 Test: admin_get_features_mandatory_features ...[2024-12-05 09:50:16.679524] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.099 [2024-12-05 09:50:16.682551] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.357 passed 00:18:29.357 Test: admin_get_features_optional_features ...[2024-12-05 09:50:16.792590] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.357 [2024-12-05 09:50:16.795625] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.357 passed 00:18:29.357 Test: admin_set_features_number_of_queues ...[2024-12-05 09:50:16.901534] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.615 [2024-12-05 09:50:17.007628] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.615 passed 00:18:29.615 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 09:50:17.113706] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.616 [2024-12-05 09:50:17.116752] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.616 passed 00:18:29.616 Test: admin_get_log_page_with_lpo ...[2024-12-05 09:50:17.222303] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.874 [2024-12-05 09:50:17.290970] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:29.874 [2024-12-05 09:50:17.304046] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.874 passed 00:18:29.874 Test: fabric_property_get ...[2024-12-05 09:50:17.410962] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.874 [2024-12-05 09:50:17.412416] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:29.874 [2024-12-05 09:50:17.413996] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.874 passed 00:18:30.132 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 09:50:17.524982] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.132 [2024-12-05 09:50:17.526460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:30.132 [2024-12-05 09:50:17.528028] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.132 passed 00:18:30.132 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 09:50:17.634754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.132 [2024-12-05 09:50:17.719950] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:30.133 [2024-12-05 09:50:17.735970] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:30.133 [2024-12-05 09:50:17.741619] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.391 passed 00:18:30.391 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 09:50:17.848905] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.391 [2024-12-05 09:50:17.850380] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:30.391 [2024-12-05 09:50:17.851951] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:30.391 passed 00:18:30.391 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 09:50:17.954831] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.651 [2024-12-05 09:50:18.028946] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:30.651 [2024-12-05 09:50:18.052962] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:30.651 [2024-12-05 09:50:18.058589] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.651 passed 00:18:30.651 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 09:50:18.166733] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.651 [2024-12-05 09:50:18.168213] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:30.651 [2024-12-05 09:50:18.168444] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:30.651 [2024-12-05 09:50:18.172803] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.651 passed 00:18:30.651 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 09:50:18.276769] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.910 [2024-12-05 09:50:18.366947] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:30.910 [2024-12-05 09:50:18.374959] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:30.910 [2024-12-05 09:50:18.382956] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:30.910 [2024-12-05 09:50:18.390961] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:30.910 [2024-12-05 09:50:18.420592] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:30.910 passed 00:18:30.910 Test: admin_create_io_sq_verify_pc ...[2024-12-05 09:50:18.530449] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.170 [2024-12-05 09:50:18.547013] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:31.170 [2024-12-05 09:50:18.572581] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.170 passed 00:18:31.170 Test: admin_create_io_qp_max_qps ...[2024-12-05 09:50:18.677517] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.547 [2024-12-05 09:50:19.778955] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:32.805 [2024-12-05 09:50:20.204573] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.805 passed 00:18:32.805 Test: admin_create_io_sq_shared_cq ...[2024-12-05 09:50:20.311783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:33.064 [2024-12-05 09:50:20.442977] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:33.064 [2024-12-05 09:50:20.480070] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:33.064 passed 00:18:33.064 00:18:33.064 Run Summary: Type Total Ran Passed Failed Inactive 00:18:33.064 suites 1 1 n/a 0 0 00:18:33.064 tests 18 18 18 0 0 00:18:33.064 asserts 360 360 
360 0 n/a 00:18:33.064 00:18:33.064 Elapsed time = 1.799 seconds 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 82638 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 82638 ']' 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 82638 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82638 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.064 killing process with pid 82638 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82638' 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 82638 00:18:33.064 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 82638 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:34.442 00:18:34.442 real 0m8.155s 00:18:34.442 user 0m22.231s 00:18:34.442 sys 0m0.794s 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.442 ************************************ 00:18:34.442 END TEST nvmf_vfio_user_nvme_compliance 00:18:34.442 ************************************ 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.442 ************************************ 00:18:34.442 START TEST nvmf_vfio_user_fuzz 00:18:34.442 ************************************ 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:34.442 * Looking for test storage... 
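Note: the START TEST/END TEST banners bracketing each suite come from the harness's run_test wrapper; a hedged sketch of its shape follows (the real autotest_common.sh version also records timing and accumulates per-test status):

run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  "$@"                            # e.g. compliance.sh --transport=tcp
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}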
00:18:34.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.442 --rc genhtml_branch_coverage=1 00:18:34.442 --rc genhtml_function_coverage=1 00:18:34.442 --rc genhtml_legend=1 00:18:34.442 --rc geninfo_all_blocks=1 00:18:34.442 --rc geninfo_unexecuted_blocks=1 00:18:34.442 00:18:34.442 ' 00:18:34.442 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.442 --rc genhtml_branch_coverage=1 00:18:34.442 --rc genhtml_function_coverage=1 00:18:34.442 --rc genhtml_legend=1 00:18:34.442 --rc geninfo_all_blocks=1 00:18:34.442 --rc geninfo_unexecuted_blocks=1 00:18:34.442 00:18:34.442 ' 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.443 --rc genhtml_branch_coverage=1 00:18:34.443 --rc genhtml_function_coverage=1 00:18:34.443 --rc genhtml_legend=1 00:18:34.443 --rc geninfo_all_blocks=1 00:18:34.443 --rc geninfo_unexecuted_blocks=1 00:18:34.443 00:18:34.443 ' 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.443 --rc genhtml_branch_coverage=1 00:18:34.443 --rc genhtml_function_coverage=1 00:18:34.443 --rc genhtml_legend=1 00:18:34.443 --rc geninfo_all_blocks=1 00:18:34.443 --rc geninfo_unexecuted_blocks=1 00:18:34.443 00:18:34.443 ' 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
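Note: the lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: each version string is split on '.' and '-' into an array and compared component by component, with missing components defaulting to 0. A simplified sketch of that comparison (the real cmp_versions supports more operators than '<'):

# Sketch: succeed when version $1 sorts strictly before version $2.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"             # "1.15" -> (1 15)
    IFS=.- read -ra ver2 <<< "$2"             # "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                  # equal versions are not "<"
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the legacy --rc option names"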
00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.443 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:34.443 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:34.443 Process pid: 82812 00:18:34.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=82812 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 82812' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 82812 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 82812 ']' 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
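Note: the target bring-up above is the standard SPDK app-start handshake: launch nvmf_tgt in the background, install a cleanup trap, then block in waitforlisten until the RPC socket answers. A minimal standalone sketch; the rpc_get_methods polling loop is an assumption about what waitforlisten does internally, while the binary path, flags, and echoes are taken from the trace:

# -i 0: shared-memory instance id, -m 0x1: run on core 0.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                                    # 82812 in this run
echo "Process pid: $nvmfpid"
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
max_retries=100
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1              # target died before listening
    (( max_retries-- > 0 )) || exit 1         # give up after ~100 probes
    sleep 0.5
done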
00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.443 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.819 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.819 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:35.819 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 malloc0 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:18:36.756 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:37.325 Shutting down the fuzz application 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 82812 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 82812 ']' 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 82812 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82812 00:18:37.325 killing process with pid 82812 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82812' 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 82812 00:18:37.325 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 82812 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:38.700 00:18:38.700 real 0m4.152s 00:18:38.700 user 0m4.411s 00:18:38.700 sys 0m0.680s 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:38.700 ************************************ 00:18:38.700 END TEST nvmf_vfio_user_fuzz 00:18:38.700 ************************************ 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.700 09:50:25 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.700 ************************************ 00:18:38.700 START TEST nvmf_auth_target 00:18:38.700 ************************************ 00:18:38.700 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:38.700 * Looking for test storage... 00:18:38.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.700 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.701 --rc genhtml_branch_coverage=1 00:18:38.701 --rc genhtml_function_coverage=1 00:18:38.701 --rc genhtml_legend=1 00:18:38.701 --rc geninfo_all_blocks=1 00:18:38.701 --rc geninfo_unexecuted_blocks=1 00:18:38.701 00:18:38.701 ' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.701 --rc genhtml_branch_coverage=1 00:18:38.701 --rc genhtml_function_coverage=1 00:18:38.701 --rc genhtml_legend=1 00:18:38.701 --rc geninfo_all_blocks=1 00:18:38.701 --rc geninfo_unexecuted_blocks=1 00:18:38.701 00:18:38.701 ' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.701 --rc genhtml_branch_coverage=1 00:18:38.701 --rc genhtml_function_coverage=1 00:18:38.701 --rc genhtml_legend=1 00:18:38.701 --rc geninfo_all_blocks=1 00:18:38.701 --rc geninfo_unexecuted_blocks=1 00:18:38.701 00:18:38.701 ' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.701 --rc genhtml_branch_coverage=1 00:18:38.701 --rc genhtml_function_coverage=1 00:18:38.701 --rc genhtml_legend=1 00:18:38.701 --rc geninfo_all_blocks=1 00:18:38.701 --rc geninfo_unexecuted_blocks=1 00:18:38.701 00:18:38.701 ' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.701 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:38.701 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:38.702 
09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:38.702 Cannot find device "nvmf_init_br" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:38.702 Cannot find device "nvmf_init_br2" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:38.702 Cannot find device "nvmf_tgt_br" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.702 Cannot find device "nvmf_tgt_br2" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:38.702 Cannot find device "nvmf_init_br" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:38.702 Cannot find device "nvmf_init_br2" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:38.702 Cannot find device "nvmf_tgt_br" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:38.702 Cannot find device "nvmf_tgt_br2" 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:18:38.702 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:38.961 Cannot find device "nvmf_br" 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:38.961 Cannot find device "nvmf_init_if" 00:18:38.961 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:38.961 Cannot find device "nvmf_init_if2" 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.961 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:38.961 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:39.219 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.219 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:39.219 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.219 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:39.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:39.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:18:39.220 00:18:39.220 --- 10.0.0.3 ping statistics --- 00:18:39.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.220 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:39.220 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:39.220 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:18:39.220 00:18:39.220 --- 10.0.0.4 ping statistics --- 00:18:39.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.220 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:39.220 00:18:39.220 --- 10.0.0.1 ping statistics --- 00:18:39.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.220 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:39.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:39.220 00:18:39.220 --- 10.0.0.2 ping statistics --- 00:18:39.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.220 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83081 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83081 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83081 ']' 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
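Note: the nvmf_veth_init block above builds the virtual test network: a target namespace (nvmf_tgt_ns_spdk), veth pairs whose target ends (10.0.0.3, 10.0.0.4) live inside the namespace and whose initiator ends (10.0.0.1, 10.0.0.2) stay in the root namespace, all stitched together through the nvmf_br bridge, with iptables rules admitting NVMe/TCP port 4420 and a four-way ping sweep as a sanity check. A condensed sketch showing one initiator pair and one target pair; the commands are lifted from the trace and the remaining pairs follow the same pattern:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target side enters the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                  # bridge the two halves together
ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                       # initiator -> target sanity check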
00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.220 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=83125 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e57102faa7a08d0d2b8ef8597f7067c3d80a610f82682fda 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Uym 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e57102faa7a08d0d2b8ef8597f7067c3d80a610f82682fda 0 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e57102faa7a08d0d2b8ef8597f7067c3d80a610f82682fda 0 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e57102faa7a08d0d2b8ef8597f7067c3d80a610f82682fda 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:40.155 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.414 09:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Uym 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Uym 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Uym 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d07a4297085b6ea333e532339ecc8e8a962640dee90d822306b6dd0cd2afa292 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3zW 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d07a4297085b6ea333e532339ecc8e8a962640dee90d822306b6dd0cd2afa292 3 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d07a4297085b6ea333e532339ecc8e8a962640dee90d822306b6dd0cd2afa292 3 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d07a4297085b6ea333e532339ecc8e8a962640dee90d822306b6dd0cd2afa292 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3zW 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3zW 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3zW 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:40.414 09:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=32d62037377a87c9ae97f3b03680452b 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.e4S 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 32d62037377a87c9ae97f3b03680452b 1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 32d62037377a87c9ae97f3b03680452b 1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=32d62037377a87c9ae97f3b03680452b 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.e4S 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.e4S 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.e4S 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a058c92de41f455c335e43d802cdfa9e3adb885efca93eb 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lxm 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1a058c92de41f455c335e43d802cdfa9e3adb885efca93eb 2 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a058c92de41f455c335e43d802cdfa9e3adb885efca93eb 2 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a058c92de41f455c335e43d802cdfa9e3adb885efca93eb 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:40.414 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lxm 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lxm 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.lxm 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2f5db5d9fa0ee6d2f2f62153d385509d5ce5943391c67caf 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.414 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.D79 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2f5db5d9fa0ee6d2f2f62153d385509d5ce5943391c67caf 2 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2f5db5d9fa0ee6d2f2f62153d385509d5ce5943391c67caf 2 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2f5db5d9fa0ee6d2f2f62153d385509d5ce5943391c67caf 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.D79 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.D79 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.D79 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.672 09:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=04d9c0068fe7dc8992ebf29c220bd050 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rr9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 04d9c0068fe7dc8992ebf29c220bd050 1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 04d9c0068fe7dc8992ebf29c220bd050 1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=04d9c0068fe7dc8992ebf29c220bd050 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rr9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rr9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.rr9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=928281b160e179a6686379e2384aacbb8cb1e2ba082924070eddb0206df62886 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.St9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
928281b160e179a6686379e2384aacbb8cb1e2ba082924070eddb0206df62886 3 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 928281b160e179a6686379e2384aacbb8cb1e2ba082924070eddb0206df62886 3 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=928281b160e179a6686379e2384aacbb8cb1e2ba082924070eddb0206df62886 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.St9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.St9 00:18:40.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.St9 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 83081 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83081 ']' 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.672 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 83125 /var/tmp/host.sock 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83125 ']' 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
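
The gen_dhchap_key calls traced above (target/auth.sh@94-97) mint the four DHHC-1 secrets this run authenticates with: keys[0..3] plus the bidirectional controller keys ckeys[0..2] (ckeys[3] is deliberately left empty). Reconstructed from the xtrace itself rather than from the nvmf/common.sh source, the helpers amount to the sketch below: draw len/2 random bytes, hex-encode them, and wrap the resulting ASCII string, plus a little-endian CRC-32 of it, as DHHC-1:<hash-id>:<base64>: with hash-id 00/01/02/03 for null/sha256/sha384/sha512. The Python body is an assumption, but it is consistent with the secrets printed later in this log (e.g. key d07a4297… reappears below as DHHC-1:03:ZDA3YTQy…Mnc3vmo=:).

gen_dhchap_key() { # e.g. gen_dhchap_key sha512 64 -> /tmp/spdk.key-sha512.XXX
    local digest=$1 len=$2 file key
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex chars of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                                # matches common.sh@758 above
    echo "$file"
}

format_dhchap_key() { format_key DHHC-1 "$@"; }       # thin wrapper, per common.sh@747

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# NVMe DH-HMAC-CHAP secret representation: a little-endian CRC-32 of the key
# bytes is appended before base64-encoding, so receivers can sanity-check it.
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

The resulting key files are then registered twice in the records that follow: with the target under the names key0..key3/ckey0..ckey2 via rpc_cmd keyring_file_add_key, and with the separate host service on /var/tmp/host.sock via hostrpc, the traced wrapper around scripts/rpc.py -s /var/tmp/host.sock.
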
00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.237 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uym 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Uym 00:18:41.498 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Uym 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.3zW ]] 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3zW 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3zW 00:18:41.783 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3zW 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.e4S 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.e4S 00:18:42.067 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.e4S 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.lxm ]] 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lxm 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lxm 00:18:42.338 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lxm 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D79 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.D79 00:18:42.596 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.D79 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.rr9 ]] 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rr9 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rr9 00:18:42.854 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rr9 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.St9 00:18:43.114 09:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.St9 00:18:43.114 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.St9 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.373 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.632 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.890 00:18:43.890 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.890 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.890 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.147 { 00:18:44.147 "auth": { 00:18:44.147 "dhgroup": "null", 00:18:44.147 "digest": "sha256", 00:18:44.147 "state": "completed" 00:18:44.147 }, 00:18:44.147 "cntlid": 1, 00:18:44.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:44.147 "listen_address": { 00:18:44.147 "adrfam": "IPv4", 00:18:44.147 "traddr": "10.0.0.3", 00:18:44.147 "trsvcid": "4420", 00:18:44.147 "trtype": "TCP" 00:18:44.147 }, 00:18:44.147 "peer_address": { 00:18:44.147 "adrfam": "IPv4", 00:18:44.147 "traddr": "10.0.0.1", 00:18:44.147 "trsvcid": "47914", 00:18:44.147 "trtype": "TCP" 00:18:44.147 }, 00:18:44.147 "qid": 0, 00:18:44.147 "state": "enabled", 00:18:44.147 "thread": "nvmf_tgt_poll_group_000" 00:18:44.147 } 00:18:44.147 ]' 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:44.147 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.406 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.406 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.406 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.664 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:18:44.664 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.882 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.883 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.883 09:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.883 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.883 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.141 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.141 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.141 { 00:18:49.141 "auth": { 00:18:49.141 "dhgroup": "null", 00:18:49.141 "digest": "sha256", 00:18:49.141 "state": "completed" 00:18:49.141 }, 00:18:49.141 "cntlid": 3, 00:18:49.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:49.141 "listen_address": { 00:18:49.141 "adrfam": "IPv4", 00:18:49.141 "traddr": "10.0.0.3", 00:18:49.141 "trsvcid": "4420", 00:18:49.141 "trtype": "TCP" 00:18:49.141 }, 00:18:49.141 "peer_address": { 00:18:49.141 "adrfam": "IPv4", 00:18:49.141 "traddr": "10.0.0.1", 00:18:49.141 "trsvcid": "47938", 00:18:49.141 "trtype": "TCP" 00:18:49.141 }, 00:18:49.141 "qid": 0, 00:18:49.141 "state": "enabled", 00:18:49.141 "thread": "nvmf_tgt_poll_group_000" 00:18:49.141 } 00:18:49.141 ]' 00:18:49.141 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.141 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.142 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.400 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret 
DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:18:49.400 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.968 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.228 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.488 00:18:50.747 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.747 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.747 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.006 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.007 { 00:18:51.007 "auth": { 00:18:51.007 "dhgroup": "null", 00:18:51.007 "digest": "sha256", 00:18:51.007 "state": "completed" 00:18:51.007 }, 00:18:51.007 "cntlid": 5, 00:18:51.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:51.007 "listen_address": { 00:18:51.007 "adrfam": "IPv4", 00:18:51.007 "traddr": "10.0.0.3", 00:18:51.007 "trsvcid": "4420", 00:18:51.007 "trtype": "TCP" 00:18:51.007 }, 00:18:51.007 "peer_address": { 00:18:51.007 "adrfam": "IPv4", 00:18:51.007 "traddr": "10.0.0.1", 00:18:51.007 "trsvcid": "47960", 00:18:51.007 "trtype": "TCP" 00:18:51.007 }, 00:18:51.007 "qid": 0, 00:18:51.007 "state": "enabled", 00:18:51.007 "thread": "nvmf_tgt_poll_group_000" 00:18:51.007 } 00:18:51.007 ]' 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.007 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.577 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:18:51.577 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.145 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.404 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.663 00:18:52.663 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.663 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.663 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.922 { 00:18:52.922 "auth": { 00:18:52.922 "dhgroup": "null", 00:18:52.922 "digest": "sha256", 00:18:52.922 "state": "completed" 00:18:52.922 }, 00:18:52.922 "cntlid": 7, 00:18:52.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:52.922 "listen_address": { 00:18:52.922 "adrfam": "IPv4", 00:18:52.922 "traddr": "10.0.0.3", 00:18:52.922 "trsvcid": "4420", 00:18:52.922 "trtype": "TCP" 00:18:52.922 }, 00:18:52.922 "peer_address": { 00:18:52.922 "adrfam": "IPv4", 00:18:52.922 "traddr": "10.0.0.1", 00:18:52.922 "trsvcid": "58484", 00:18:52.922 "trtype": "TCP" 00:18:52.922 }, 00:18:52.922 "qid": 0, 00:18:52.922 "state": "enabled", 00:18:52.922 "thread": "nvmf_tgt_poll_group_000" 00:18:52.922 } 00:18:52.922 ]' 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.922 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.181 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.181 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.181 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.440 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:18:53.440 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.009 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.268 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.526 00:18:54.526 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.526 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.526 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.785 { 00:18:54.785 "auth": { 00:18:54.785 "dhgroup": "ffdhe2048", 00:18:54.785 "digest": "sha256", 00:18:54.785 "state": "completed" 00:18:54.785 }, 00:18:54.785 "cntlid": 9, 00:18:54.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:54.785 "listen_address": { 00:18:54.785 "adrfam": "IPv4", 00:18:54.785 "traddr": "10.0.0.3", 00:18:54.785 "trsvcid": "4420", 00:18:54.785 "trtype": "TCP" 00:18:54.785 }, 00:18:54.785 "peer_address": { 00:18:54.785 "adrfam": "IPv4", 00:18:54.785 "traddr": "10.0.0.1", 00:18:54.785 "trsvcid": "58532", 00:18:54.785 "trtype": "TCP" 00:18:54.785 }, 00:18:54.785 "qid": 0, 00:18:54.785 "state": "enabled", 00:18:54.785 "thread": "nvmf_tgt_poll_group_000" 00:18:54.785 } 00:18:54.785 ]' 00:18:54.785 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.044 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.303 
09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:18:55.303 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.870 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.871 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.871 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.129 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.697 00:18:56.697 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.697 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.697 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.956 { 00:18:56.956 "auth": { 00:18:56.956 "dhgroup": "ffdhe2048", 00:18:56.956 "digest": "sha256", 00:18:56.956 "state": "completed" 00:18:56.956 }, 00:18:56.956 "cntlid": 11, 00:18:56.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:56.956 "listen_address": { 00:18:56.956 "adrfam": "IPv4", 00:18:56.956 "traddr": "10.0.0.3", 00:18:56.956 "trsvcid": "4420", 00:18:56.956 "trtype": "TCP" 00:18:56.956 }, 00:18:56.956 "peer_address": { 00:18:56.956 "adrfam": "IPv4", 00:18:56.956 "traddr": "10.0.0.1", 00:18:56.956 "trsvcid": "58556", 00:18:56.956 "trtype": "TCP" 00:18:56.956 }, 00:18:56.956 "qid": 0, 00:18:56.956 "state": "enabled", 00:18:56.956 "thread": "nvmf_tgt_poll_group_000" 00:18:56.956 } 00:18:56.956 ]' 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.956 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.956 
09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.215 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:18:57.215 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.783 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.041 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.607 00:18:58.607 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.607 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.607 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.607 { 00:18:58.607 "auth": { 00:18:58.607 "dhgroup": "ffdhe2048", 00:18:58.607 "digest": "sha256", 00:18:58.607 "state": "completed" 00:18:58.607 }, 00:18:58.607 "cntlid": 13, 00:18:58.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:18:58.607 "listen_address": { 00:18:58.607 "adrfam": "IPv4", 00:18:58.607 "traddr": "10.0.0.3", 00:18:58.607 "trsvcid": "4420", 00:18:58.607 "trtype": "TCP" 00:18:58.607 }, 00:18:58.607 "peer_address": { 00:18:58.607 "adrfam": "IPv4", 00:18:58.607 "traddr": "10.0.0.1", 00:18:58.607 "trsvcid": "58564", 00:18:58.607 "trtype": "TCP" 00:18:58.607 }, 00:18:58.607 "qid": 0, 00:18:58.607 "state": "enabled", 00:18:58.607 "thread": "nvmf_tgt_poll_group_000" 00:18:58.607 } 00:18:58.607 ]' 00:18:58.607 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.865 09:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.865 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.124 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:18:59.124 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
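The records above and below repeat one DH-CHAP verification cycle per (digest, dhgroup, key-index) combination. The following condensed sketch of a single cycle is assembled from the exact commands in this log; it is illustrative only. rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the -s /var/tmp/host.sock invocations drive the host-side SPDK instance, while the bare invocations stand in for the target-side rpc_cmd helper (assumed here to use the default SPDK RPC socket). The inline DHHC-1 secrets are elided rather than reproduced.

# One DH-CHAP cycle, condensed from this log. HOSTNQN/SUBNQN are the run's
# actual values; key0/ckey0 stand for whichever key index the loop is on.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the allowed digest and DH group for this iteration.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: register the host with its DH-CHAP key (and, when the loop
# provides one, the controller key for bidirectional authentication).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach as an SPDK initiator, authenticating with the same keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm the qpair negotiated what this iteration configured.
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # sha256
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # ffdhe3072
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # completed

rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the attach through the Linux kernel initiator, passing the secrets
# inline in DHHC-1 form (values elided; see the log records for the run's).
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 \
    --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
nvme disconnect -n "$SUBNQN"
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Running both bdev_nvme_attach_controller and nvme connect in each cycle exercises the SPDK initiator and the kernel initiator against the same target configuration, which is why every iteration in the log shows the attach/verify/detach sequence followed by a connect/disconnect pair.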
00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.059 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.060 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.627 00:19:00.627 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.627 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.627 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.885 { 00:19:00.885 "auth": { 00:19:00.885 "dhgroup": "ffdhe2048", 00:19:00.885 "digest": "sha256", 00:19:00.885 "state": "completed" 00:19:00.885 }, 00:19:00.885 "cntlid": 15, 00:19:00.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:00.885 "listen_address": { 00:19:00.885 "adrfam": "IPv4", 00:19:00.885 "traddr": "10.0.0.3", 00:19:00.885 "trsvcid": "4420", 00:19:00.885 "trtype": "TCP" 00:19:00.885 }, 00:19:00.885 "peer_address": { 00:19:00.885 "adrfam": "IPv4", 00:19:00.885 "traddr": "10.0.0.1", 00:19:00.885 "trsvcid": "58582", 00:19:00.885 "trtype": "TCP" 00:19:00.885 }, 00:19:00.885 "qid": 0, 00:19:00.885 "state": "enabled", 00:19:00.885 "thread": "nvmf_tgt_poll_group_000" 00:19:00.885 } 00:19:00.885 ]' 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.885 
09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.885 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.144 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:01.144 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:01.709 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.968 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.227 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.486 00:19:02.486 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.486 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.486 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.744 { 00:19:02.744 "auth": { 00:19:02.744 "dhgroup": "ffdhe3072", 00:19:02.744 "digest": "sha256", 00:19:02.744 "state": "completed" 00:19:02.744 }, 00:19:02.744 "cntlid": 17, 00:19:02.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:02.744 "listen_address": { 00:19:02.744 "adrfam": "IPv4", 00:19:02.744 "traddr": "10.0.0.3", 00:19:02.744 "trsvcid": "4420", 00:19:02.744 "trtype": "TCP" 00:19:02.744 }, 00:19:02.744 "peer_address": { 00:19:02.744 "adrfam": "IPv4", 00:19:02.744 "traddr": "10.0.0.1", 00:19:02.744 "trsvcid": "58618", 00:19:02.744 "trtype": "TCP" 00:19:02.744 }, 00:19:02.744 "qid": 0, 00:19:02.744 "state": "enabled", 00:19:02.744 "thread": "nvmf_tgt_poll_group_000" 00:19:02.744 } 00:19:02.744 ]' 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.744 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.003 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.003 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.003 09:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.003 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.003 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.262 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:03.262 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.829 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.088 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.347 00:19:04.347 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.347 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.347 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.915 { 00:19:04.915 "auth": { 00:19:04.915 "dhgroup": "ffdhe3072", 00:19:04.915 "digest": "sha256", 00:19:04.915 "state": "completed" 00:19:04.915 }, 00:19:04.915 "cntlid": 19, 00:19:04.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:04.915 "listen_address": { 00:19:04.915 "adrfam": "IPv4", 00:19:04.915 "traddr": "10.0.0.3", 00:19:04.915 "trsvcid": "4420", 00:19:04.915 "trtype": "TCP" 00:19:04.915 }, 00:19:04.915 "peer_address": { 00:19:04.915 "adrfam": "IPv4", 00:19:04.915 "traddr": "10.0.0.1", 00:19:04.915 "trsvcid": "54750", 00:19:04.915 "trtype": "TCP" 00:19:04.915 }, 00:19:04.915 "qid": 0, 00:19:04.915 "state": "enabled", 00:19:04.915 "thread": "nvmf_tgt_poll_group_000" 00:19:04.915 } 00:19:04.915 ]' 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.915 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.174 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:05.174 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.742 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.001 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.002 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.569 00:19:06.569 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.569 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.569 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.827 { 00:19:06.827 "auth": { 00:19:06.827 "dhgroup": "ffdhe3072", 00:19:06.827 "digest": "sha256", 00:19:06.827 "state": "completed" 00:19:06.827 }, 00:19:06.827 "cntlid": 21, 00:19:06.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:06.827 "listen_address": { 00:19:06.827 "adrfam": "IPv4", 00:19:06.827 "traddr": "10.0.0.3", 00:19:06.827 "trsvcid": "4420", 00:19:06.827 "trtype": "TCP" 00:19:06.827 }, 00:19:06.827 "peer_address": { 00:19:06.827 "adrfam": "IPv4", 00:19:06.827 "traddr": "10.0.0.1", 00:19:06.827 "trsvcid": "54782", 00:19:06.827 "trtype": "TCP" 00:19:06.827 }, 00:19:06.827 "qid": 0, 00:19:06.827 "state": "enabled", 00:19:06.827 "thread": "nvmf_tgt_poll_group_000" 00:19:06.827 } 00:19:06.827 ]' 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.827 09:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.827 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.085 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:07.085 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:08.021 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.279 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.536 00:19:08.536 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.536 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.536 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.793 { 00:19:08.793 "auth": { 00:19:08.793 "dhgroup": "ffdhe3072", 00:19:08.793 "digest": "sha256", 00:19:08.793 "state": "completed" 00:19:08.793 }, 00:19:08.793 "cntlid": 23, 00:19:08.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:08.793 "listen_address": { 00:19:08.793 "adrfam": "IPv4", 00:19:08.793 "traddr": "10.0.0.3", 00:19:08.793 "trsvcid": "4420", 00:19:08.793 "trtype": "TCP" 00:19:08.793 }, 00:19:08.793 "peer_address": { 00:19:08.793 "adrfam": "IPv4", 00:19:08.793 "traddr": "10.0.0.1", 00:19:08.793 "trsvcid": "54808", 00:19:08.793 "trtype": "TCP" 00:19:08.793 }, 00:19:08.793 "qid": 0, 00:19:08.793 "state": "enabled", 00:19:08.793 "thread": "nvmf_tgt_poll_group_000" 00:19:08.793 } 00:19:08.793 ]' 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:08.793 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.051 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.051 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.051 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.051 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.051 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:09.309 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.876 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.171 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.452 00:19:10.452 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.452 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.452 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.724 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.724 { 00:19:10.724 "auth": { 00:19:10.724 "dhgroup": "ffdhe4096", 00:19:10.725 "digest": "sha256", 00:19:10.725 "state": "completed" 00:19:10.725 }, 00:19:10.725 "cntlid": 25, 00:19:10.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:10.725 "listen_address": { 00:19:10.725 "adrfam": "IPv4", 00:19:10.725 "traddr": "10.0.0.3", 00:19:10.725 "trsvcid": "4420", 00:19:10.725 "trtype": "TCP" 00:19:10.725 }, 00:19:10.725 "peer_address": { 00:19:10.725 "adrfam": "IPv4", 00:19:10.725 "traddr": "10.0.0.1", 00:19:10.725 "trsvcid": "54842", 00:19:10.725 "trtype": "TCP" 00:19:10.725 }, 00:19:10.725 "qid": 0, 00:19:10.725 "state": "enabled", 00:19:10.725 "thread": "nvmf_tgt_poll_group_000" 00:19:10.725 } 00:19:10.725 ]' 00:19:10.725 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:10.725 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.725 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.984 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.984 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.984 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.984 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.984 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.243 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:11.243 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.812 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.071 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.639 00:19:12.639 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.639 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.639 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.897 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.897 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.897 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.897 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.898 { 00:19:12.898 "auth": { 00:19:12.898 "dhgroup": "ffdhe4096", 00:19:12.898 "digest": "sha256", 00:19:12.898 "state": "completed" 00:19:12.898 }, 00:19:12.898 "cntlid": 27, 00:19:12.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:12.898 "listen_address": { 00:19:12.898 "adrfam": "IPv4", 00:19:12.898 "traddr": "10.0.0.3", 00:19:12.898 "trsvcid": "4420", 00:19:12.898 "trtype": "TCP" 00:19:12.898 }, 00:19:12.898 "peer_address": { 00:19:12.898 "adrfam": "IPv4", 00:19:12.898 "traddr": "10.0.0.1", 00:19:12.898 "trsvcid": "54854", 00:19:12.898 "trtype": "TCP" 00:19:12.898 }, 00:19:12.898 "qid": 0, 
00:19:12.898 "state": "enabled", 00:19:12.898 "thread": "nvmf_tgt_poll_group_000" 00:19:12.898 } 00:19:12.898 ]' 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.898 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.157 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:13.157 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.093 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.094 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.094 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.094 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.352 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.352 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.352 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.611 00:19:14.611 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.611 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.611 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.871 { 00:19:14.871 "auth": { 00:19:14.871 "dhgroup": "ffdhe4096", 00:19:14.871 "digest": "sha256", 00:19:14.871 "state": "completed" 00:19:14.871 }, 00:19:14.871 "cntlid": 29, 00:19:14.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:14.871 "listen_address": { 00:19:14.871 "adrfam": "IPv4", 00:19:14.871 "traddr": "10.0.0.3", 00:19:14.871 "trsvcid": "4420", 00:19:14.871 "trtype": "TCP" 00:19:14.871 }, 00:19:14.871 "peer_address": { 00:19:14.871 "adrfam": "IPv4", 00:19:14.871 "traddr": "10.0.0.1", 
00:19:14.871 "trsvcid": "59968", 00:19:14.871 "trtype": "TCP" 00:19:14.871 }, 00:19:14.871 "qid": 0, 00:19:14.871 "state": "enabled", 00:19:14.871 "thread": "nvmf_tgt_poll_group_000" 00:19:14.871 } 00:19:14.871 ]' 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.871 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.131 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.131 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.131 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.131 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.131 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.390 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:15.390 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.957 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.216 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.783 00:19:16.783 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.783 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.783 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.042 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.042 { 00:19:17.042 "auth": { 00:19:17.042 "dhgroup": "ffdhe4096", 00:19:17.042 "digest": "sha256", 00:19:17.042 "state": "completed" 00:19:17.042 }, 00:19:17.042 "cntlid": 31, 00:19:17.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:17.043 "listen_address": { 00:19:17.043 "adrfam": "IPv4", 00:19:17.043 "traddr": "10.0.0.3", 00:19:17.043 "trsvcid": "4420", 00:19:17.043 "trtype": "TCP" 00:19:17.043 }, 00:19:17.043 "peer_address": { 00:19:17.043 "adrfam": "IPv4", 00:19:17.043 "traddr": 
"10.0.0.1", 00:19:17.043 "trsvcid": "59990", 00:19:17.043 "trtype": "TCP" 00:19:17.043 }, 00:19:17.043 "qid": 0, 00:19:17.043 "state": "enabled", 00:19:17.043 "thread": "nvmf_tgt_poll_group_000" 00:19:17.043 } 00:19:17.043 ]' 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.043 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.302 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:17.302 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.868 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.125 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.690 00:19:18.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.690 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.948 { 00:19:18.948 "auth": { 00:19:18.948 "dhgroup": "ffdhe6144", 00:19:18.948 "digest": "sha256", 00:19:18.948 "state": "completed" 00:19:18.948 }, 00:19:18.948 "cntlid": 33, 00:19:18.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:18.948 "listen_address": { 00:19:18.948 "adrfam": "IPv4", 00:19:18.948 "traddr": "10.0.0.3", 00:19:18.948 "trsvcid": "4420", 00:19:18.948 
"trtype": "TCP" 00:19:18.948 }, 00:19:18.948 "peer_address": { 00:19:18.948 "adrfam": "IPv4", 00:19:18.948 "traddr": "10.0.0.1", 00:19:18.948 "trsvcid": "60036", 00:19:18.948 "trtype": "TCP" 00:19:18.948 }, 00:19:18.948 "qid": 0, 00:19:18.948 "state": "enabled", 00:19:18.948 "thread": "nvmf_tgt_poll_group_000" 00:19:18.948 } 00:19:18.948 ]' 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.948 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.207 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.207 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.207 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.466 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:19.466 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.031 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.032 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.032 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.290 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.547 00:19:20.547 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.547 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.547 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.113 { 00:19:21.113 "auth": { 00:19:21.113 "dhgroup": "ffdhe6144", 00:19:21.113 "digest": "sha256", 00:19:21.113 "state": "completed" 00:19:21.113 }, 00:19:21.113 "cntlid": 35, 00:19:21.113 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:21.113 "listen_address": { 00:19:21.113 "adrfam": "IPv4", 00:19:21.113 "traddr": "10.0.0.3", 00:19:21.113 "trsvcid": "4420", 00:19:21.113 "trtype": "TCP" 00:19:21.113 }, 00:19:21.113 "peer_address": { 00:19:21.113 "adrfam": "IPv4", 00:19:21.113 "traddr": "10.0.0.1", 00:19:21.113 "trsvcid": "60076", 00:19:21.113 "trtype": "TCP" 00:19:21.113 }, 00:19:21.113 "qid": 0, 00:19:21.113 "state": "enabled", 00:19:21.113 "thread": "nvmf_tgt_poll_group_000" 00:19:21.113 } 00:19:21.113 ]' 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.113 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.371 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:21.371 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.937 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.195 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.762 00:19:22.762 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.762 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.762 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.020 { 00:19:23.020 "auth": { 00:19:23.020 "dhgroup": "ffdhe6144", 
00:19:23.020 "digest": "sha256", 00:19:23.020 "state": "completed" 00:19:23.020 }, 00:19:23.020 "cntlid": 37, 00:19:23.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:23.020 "listen_address": { 00:19:23.020 "adrfam": "IPv4", 00:19:23.020 "traddr": "10.0.0.3", 00:19:23.020 "trsvcid": "4420", 00:19:23.020 "trtype": "TCP" 00:19:23.020 }, 00:19:23.020 "peer_address": { 00:19:23.020 "adrfam": "IPv4", 00:19:23.021 "traddr": "10.0.0.1", 00:19:23.021 "trsvcid": "60118", 00:19:23.021 "trtype": "TCP" 00:19:23.021 }, 00:19:23.021 "qid": 0, 00:19:23.021 "state": "enabled", 00:19:23.021 "thread": "nvmf_tgt_poll_group_000" 00:19:23.021 } 00:19:23.021 ]' 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.021 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.279 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:23.279 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.216 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.784 00:19:24.784 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.784 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.784 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.043 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.043 { 00:19:25.043 "auth": { 00:19:25.043 "dhgroup": 
"ffdhe6144", 00:19:25.043 "digest": "sha256", 00:19:25.043 "state": "completed" 00:19:25.043 }, 00:19:25.043 "cntlid": 39, 00:19:25.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:25.043 "listen_address": { 00:19:25.043 "adrfam": "IPv4", 00:19:25.043 "traddr": "10.0.0.3", 00:19:25.043 "trsvcid": "4420", 00:19:25.043 "trtype": "TCP" 00:19:25.043 }, 00:19:25.043 "peer_address": { 00:19:25.043 "adrfam": "IPv4", 00:19:25.043 "traddr": "10.0.0.1", 00:19:25.043 "trsvcid": "45680", 00:19:25.043 "trtype": "TCP" 00:19:25.043 }, 00:19:25.043 "qid": 0, 00:19:25.043 "state": "enabled", 00:19:25.043 "thread": "nvmf_tgt_poll_group_000" 00:19:25.043 } 00:19:25.043 ]' 00:19:25.044 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.044 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.044 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.044 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.044 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.303 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.303 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.303 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.562 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:25.562 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.130 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.390 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.958 00:19:26.958 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.958 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.958 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.526 09:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.526 { 00:19:27.526 "auth": { 00:19:27.526 "dhgroup": "ffdhe8192", 00:19:27.526 "digest": "sha256", 00:19:27.526 "state": "completed" 00:19:27.526 }, 00:19:27.526 "cntlid": 41, 00:19:27.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:27.526 "listen_address": { 00:19:27.526 "adrfam": "IPv4", 00:19:27.526 "traddr": "10.0.0.3", 00:19:27.526 "trsvcid": "4420", 00:19:27.526 "trtype": "TCP" 00:19:27.526 }, 00:19:27.526 "peer_address": { 00:19:27.526 "adrfam": "IPv4", 00:19:27.526 "traddr": "10.0.0.1", 00:19:27.526 "trsvcid": "45700", 00:19:27.526 "trtype": "TCP" 00:19:27.526 }, 00:19:27.526 "qid": 0, 00:19:27.526 "state": "enabled", 00:19:27.526 "thread": "nvmf_tgt_poll_group_000" 00:19:27.526 } 00:19:27.526 ]' 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.526 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.526 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.526 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.527 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.785 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:27.785 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:28.351 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.609 09:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.609 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.609 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.610 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.544 00:19:29.544 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.544 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.544 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.544 09:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.544 { 00:19:29.544 "auth": { 00:19:29.544 "dhgroup": "ffdhe8192", 00:19:29.544 "digest": "sha256", 00:19:29.544 "state": "completed" 00:19:29.544 }, 00:19:29.544 "cntlid": 43, 00:19:29.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:29.544 "listen_address": { 00:19:29.544 "adrfam": "IPv4", 00:19:29.544 "traddr": "10.0.0.3", 00:19:29.544 "trsvcid": "4420", 00:19:29.544 "trtype": "TCP" 00:19:29.544 }, 00:19:29.544 "peer_address": { 00:19:29.544 "adrfam": "IPv4", 00:19:29.544 "traddr": "10.0.0.1", 00:19:29.544 "trsvcid": "45738", 00:19:29.544 "trtype": "TCP" 00:19:29.544 }, 00:19:29.544 "qid": 0, 00:19:29.544 "state": "enabled", 00:19:29.544 "thread": "nvmf_tgt_poll_group_000" 00:19:29.544 } 00:19:29.544 ]' 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.544 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.802 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.802 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.802 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.802 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.803 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.061 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:30.061 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
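The assertion step visible at target/auth.sh@73-77 in each pass reads the controller name back from the host app, fetches the subsystem's qpairs from the target, and compares the reported auth block against the requested digest and dhgroup, expecting the handshake state "completed". A paraphrase of those checks for the ffdhe8192 passes now running, reusing the rpc/subnqn variables from the sketch above:

  # Verify the negotiated auth parameters on the target-side qpair.
  name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
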
00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.629 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.888 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.456 00:19:31.456 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.456 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.456 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.715 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.715 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.715 09:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.715 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.715 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.715 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.715 { 00:19:31.715 "auth": { 00:19:31.715 "dhgroup": "ffdhe8192", 00:19:31.715 "digest": "sha256", 00:19:31.715 "state": "completed" 00:19:31.715 }, 00:19:31.715 "cntlid": 45, 00:19:31.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:31.715 "listen_address": { 00:19:31.715 "adrfam": "IPv4", 00:19:31.715 "traddr": "10.0.0.3", 00:19:31.715 "trsvcid": "4420", 00:19:31.715 "trtype": "TCP" 00:19:31.715 }, 00:19:31.715 "peer_address": { 00:19:31.715 "adrfam": "IPv4", 00:19:31.715 "traddr": "10.0.0.1", 00:19:31.715 "trsvcid": "45768", 00:19:31.715 "trtype": "TCP" 00:19:31.716 }, 00:19:31.716 "qid": 0, 00:19:31.716 "state": "enabled", 00:19:31.716 "thread": "nvmf_tgt_poll_group_000" 00:19:31.716 } 00:19:31.716 ]' 00:19:31.716 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.716 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.716 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.974 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.974 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.974 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.974 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.975 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:32.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
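The qpairs dump is where the result is actually asserted: after attaching, the test pulls the active qpair from the target and checks that the negotiated digest and dhgroup match what was configured, and that auth.state reached "completed" rather than a transient state. The three jq probes in the trace reduce to the following (a sketch; the JSON is the nvmf_subsystem_get_qpairs output captured above):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]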
00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.800 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.059 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.626 00:19:33.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.885 
09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.885 { 00:19:33.885 "auth": { 00:19:33.885 "dhgroup": "ffdhe8192", 00:19:33.885 "digest": "sha256", 00:19:33.885 "state": "completed" 00:19:33.885 }, 00:19:33.885 "cntlid": 47, 00:19:33.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:33.885 "listen_address": { 00:19:33.885 "adrfam": "IPv4", 00:19:33.885 "traddr": "10.0.0.3", 00:19:33.885 "trsvcid": "4420", 00:19:33.885 "trtype": "TCP" 00:19:33.885 }, 00:19:33.885 "peer_address": { 00:19:33.885 "adrfam": "IPv4", 00:19:33.885 "traddr": "10.0.0.1", 00:19:33.885 "trsvcid": "51502", 00:19:33.885 "trtype": "TCP" 00:19:33.885 }, 00:19:33.885 "qid": 0, 00:19:33.885 "state": "enabled", 00:19:33.885 "thread": "nvmf_tgt_poll_group_000" 00:19:33.885 } 00:19:33.885 ]' 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.885 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.144 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.144 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.144 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.144 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.144 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.402 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:34.402 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
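Here the outer loops roll over: key3 was the last key for sha256/ffdhe8192, so the run advances to sha384 with the null dhgroup (challenge-response only, no Diffie-Hellman exchange) and starts again at key0. Reconstructed from the target/auth.sh@118 through @120 trace lines, the driver is a plain triple loop, with connect_authenticate as the per-combination body whose steps this log records:

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # re-arm the host with only this digest/dhgroup, then run one auth round trip
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done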
00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.969 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.970 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.970 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.229 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.487 00:19:35.487 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.487 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.487 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.746 { 00:19:35.746 "auth": { 00:19:35.746 "dhgroup": "null", 00:19:35.746 "digest": "sha384", 00:19:35.746 "state": "completed" 00:19:35.746 }, 00:19:35.746 "cntlid": 49, 00:19:35.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:35.746 "listen_address": { 00:19:35.746 "adrfam": "IPv4", 00:19:35.746 "traddr": "10.0.0.3", 00:19:35.746 "trsvcid": "4420", 00:19:35.746 "trtype": "TCP" 00:19:35.746 }, 00:19:35.746 "peer_address": { 00:19:35.746 "adrfam": "IPv4", 00:19:35.746 "traddr": "10.0.0.1", 00:19:35.746 "trsvcid": "51524", 00:19:35.746 "trtype": "TCP" 00:19:35.746 }, 00:19:35.746 "qid": 0, 00:19:35.746 "state": "enabled", 00:19:35.746 "thread": "nvmf_tgt_poll_group_000" 00:19:35.746 } 00:19:35.746 ]' 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.746 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.313 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:36.314 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.573 09:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.573 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.846 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.847 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.456 00:19:37.456 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.456 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.456 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.456 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.456 { 00:19:37.456 "auth": { 00:19:37.456 "dhgroup": "null", 00:19:37.456 "digest": "sha384", 00:19:37.456 "state": "completed" 00:19:37.456 }, 00:19:37.456 "cntlid": 51, 00:19:37.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:37.456 "listen_address": { 00:19:37.456 "adrfam": "IPv4", 00:19:37.456 "traddr": "10.0.0.3", 00:19:37.456 "trsvcid": "4420", 00:19:37.456 "trtype": "TCP" 00:19:37.456 }, 00:19:37.456 "peer_address": { 00:19:37.456 "adrfam": "IPv4", 00:19:37.456 "traddr": "10.0.0.1", 00:19:37.456 "trsvcid": "51560", 00:19:37.456 "trtype": "TCP" 00:19:37.456 }, 00:19:37.456 "qid": 0, 00:19:37.456 "state": "enabled", 00:19:37.456 "thread": "nvmf_tgt_poll_group_000" 00:19:37.456 } 00:19:37.456 ]' 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.456 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.726 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.726 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.726 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.726 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.726 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.984 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:37.984 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:38.550 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.550 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.550 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:38.550 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.550 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.550 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.550 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.550 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.550 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.807 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.808 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.808 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.808 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.808 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.066 00:19:39.066 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.066 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.066 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.324 { 00:19:39.324 "auth": { 00:19:39.324 "dhgroup": "null", 00:19:39.324 "digest": "sha384", 00:19:39.324 "state": "completed" 00:19:39.324 }, 00:19:39.324 "cntlid": 53, 00:19:39.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:39.324 "listen_address": { 00:19:39.324 "adrfam": "IPv4", 00:19:39.324 "traddr": "10.0.0.3", 00:19:39.324 "trsvcid": "4420", 00:19:39.324 "trtype": "TCP" 00:19:39.324 }, 00:19:39.324 "peer_address": { 00:19:39.324 "adrfam": "IPv4", 00:19:39.324 "traddr": "10.0.0.1", 00:19:39.324 "trsvcid": "51582", 00:19:39.324 "trtype": "TCP" 00:19:39.324 }, 00:19:39.324 "qid": 0, 00:19:39.324 "state": "enabled", 00:19:39.324 "thread": "nvmf_tgt_poll_group_000" 00:19:39.324 } 00:19:39.324 ]' 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:39.324 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.583 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.583 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.583 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.842 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:39.842 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:40.101 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.359 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.618 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.877 00:19:40.877 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.877 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.877 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.136 { 00:19:41.136 "auth": { 00:19:41.136 "dhgroup": "null", 00:19:41.136 "digest": "sha384", 00:19:41.136 "state": "completed" 00:19:41.136 }, 00:19:41.136 "cntlid": 55, 00:19:41.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:41.136 "listen_address": { 00:19:41.136 "adrfam": "IPv4", 00:19:41.136 "traddr": "10.0.0.3", 00:19:41.136 "trsvcid": "4420", 00:19:41.136 "trtype": "TCP" 00:19:41.136 }, 00:19:41.136 "peer_address": { 00:19:41.136 "adrfam": "IPv4", 00:19:41.136 "traddr": "10.0.0.1", 00:19:41.136 "trsvcid": "51606", 00:19:41.136 "trtype": "TCP" 00:19:41.136 }, 00:19:41.136 "qid": 0, 00:19:41.136 "state": "enabled", 00:19:41.136 "thread": "nvmf_tgt_poll_group_000" 00:19:41.136 } 00:19:41.136 ]' 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.136 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.394 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:41.394 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.962 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.220 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.221 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.221 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.221 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.221 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.480 00:19:42.480 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
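Besides the SPDK-initiator checks, every combination is also driven from the kernel initiator: nvme connect is invoked with the DHHC-1 secret blobs inline (the 00/01/02/03 field after "DHHC-1:" encodes whether, and with which hash, the base64 key material was transformed, 00 meaning an untransformed secret), and a clean "disconnected 1 controller(s)" confirms the session authenticated. A sketch of that leg with the secrets abbreviated (the full blobs appear in the log; "$hostnqn" and "$hostid" stand in for the uuid values above):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<key0...>:' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0...>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)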
00:19:42.480 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.480 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.738 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.738 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.738 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.738 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.996 { 00:19:42.996 "auth": { 00:19:42.996 "dhgroup": "ffdhe2048", 00:19:42.996 "digest": "sha384", 00:19:42.996 "state": "completed" 00:19:42.996 }, 00:19:42.996 "cntlid": 57, 00:19:42.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:42.996 "listen_address": { 00:19:42.996 "adrfam": "IPv4", 00:19:42.996 "traddr": "10.0.0.3", 00:19:42.996 "trsvcid": "4420", 00:19:42.996 "trtype": "TCP" 00:19:42.996 }, 00:19:42.996 "peer_address": { 00:19:42.996 "adrfam": "IPv4", 00:19:42.996 "traddr": "10.0.0.1", 00:19:42.996 "trsvcid": "51636", 00:19:42.996 "trtype": "TCP" 00:19:42.996 }, 00:19:42.996 "qid": 0, 00:19:42.996 "state": "enabled", 00:19:42.996 "thread": "nvmf_tgt_poll_group_000" 00:19:42.996 } 00:19:42.996 ]' 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.996 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.997 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.256 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:43.256 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: 
--dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:44.196 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.197 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.764 00:19:44.764 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.764 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.764 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.023 { 00:19:45.023 "auth": { 00:19:45.023 "dhgroup": "ffdhe2048", 00:19:45.023 "digest": "sha384", 00:19:45.023 "state": "completed" 00:19:45.023 }, 00:19:45.023 "cntlid": 59, 00:19:45.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:45.023 "listen_address": { 00:19:45.023 "adrfam": "IPv4", 00:19:45.023 "traddr": "10.0.0.3", 00:19:45.023 "trsvcid": "4420", 00:19:45.023 "trtype": "TCP" 00:19:45.023 }, 00:19:45.023 "peer_address": { 00:19:45.023 "adrfam": "IPv4", 00:19:45.023 "traddr": "10.0.0.1", 00:19:45.023 "trsvcid": "39972", 00:19:45.023 "trtype": "TCP" 00:19:45.023 }, 00:19:45.023 "qid": 0, 00:19:45.023 "state": "enabled", 00:19:45.023 "thread": "nvmf_tgt_poll_group_000" 00:19:45.023 } 00:19:45.023 ]' 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.023 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.282 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:45.282 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.849 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.108 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.367 00:19:46.626 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.626 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.626 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.884 { 00:19:46.884 "auth": { 00:19:46.884 "dhgroup": "ffdhe2048", 00:19:46.884 "digest": "sha384", 00:19:46.884 "state": "completed" 00:19:46.884 }, 00:19:46.884 "cntlid": 61, 00:19:46.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:46.884 "listen_address": { 00:19:46.884 "adrfam": "IPv4", 00:19:46.884 "traddr": "10.0.0.3", 00:19:46.884 "trsvcid": "4420", 00:19:46.884 "trtype": "TCP" 00:19:46.884 }, 00:19:46.884 "peer_address": { 00:19:46.884 "adrfam": "IPv4", 00:19:46.884 "traddr": "10.0.0.1", 00:19:46.884 "trsvcid": "40012", 00:19:46.884 "trtype": "TCP" 00:19:46.884 }, 00:19:46.884 "qid": 0, 00:19:46.884 "state": "enabled", 00:19:46.884 "thread": "nvmf_tgt_poll_group_000" 00:19:46.884 } 00:19:46.884 ]' 00:19:46.884 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.885 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.452 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:47.452 09:51:34 
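# --- sketch (not captured trace output): kernel-initiator connect step ---
# The records above drive the SPDK host stack through bdev_nvme_attach_controller; the
# record that follows repeats the same authentication with the Linux kernel initiator
# via nvme-cli. A minimal reproduction of that step, assuming nvme-cli with
# DH-HMAC-CHAP support and the target listening on 10.0.0.3:4420, looks like this
# (the DHHC-1 secrets are placeholders; the trace uses the base64 secrets provisioned
# on cnode0):
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
# --dhchap-secret authenticates the host; adding --dhchap-ctrl-secret makes the host
# verify the controller as well (bidirectional DH-HMAC-CHAP).
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 \
  --dhchap-secret 'DHHC-1:01:<host-secret>:' \
  --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>:'
nvme disconnect -n "$subnqn"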
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:48.042 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.043 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.301 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.559 00:19:48.559 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.559 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.559 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.818 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.818 { 00:19:48.818 "auth": { 00:19:48.818 "dhgroup": "ffdhe2048", 00:19:48.818 "digest": "sha384", 00:19:48.818 "state": "completed" 00:19:48.818 }, 00:19:48.818 "cntlid": 63, 00:19:48.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:48.818 "listen_address": { 00:19:48.818 "adrfam": "IPv4", 00:19:48.818 "traddr": "10.0.0.3", 00:19:48.818 "trsvcid": "4420", 00:19:48.818 "trtype": "TCP" 00:19:48.818 }, 00:19:48.818 "peer_address": { 00:19:48.818 "adrfam": "IPv4", 00:19:48.818 "traddr": "10.0.0.1", 00:19:48.818 "trsvcid": "40050", 00:19:48.818 "trtype": "TCP" 00:19:48.818 }, 00:19:48.818 "qid": 0, 00:19:48.818 "state": "enabled", 00:19:48.818 "thread": "nvmf_tgt_poll_group_000" 00:19:48.819 } 00:19:48.819 ]' 00:19:48.819 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.077 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.336 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:49.336 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:49.903 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.903 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:49.903 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.903 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.162 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.162 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.162 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.162 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.162 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.420 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:50.421 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.678 00:19:50.678 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.678 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.678 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.936 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.936 { 00:19:50.936 "auth": { 00:19:50.936 "dhgroup": "ffdhe3072", 00:19:50.936 "digest": "sha384", 00:19:50.936 "state": "completed" 00:19:50.936 }, 00:19:50.936 "cntlid": 65, 00:19:50.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:50.937 "listen_address": { 00:19:50.937 "adrfam": "IPv4", 00:19:50.937 "traddr": "10.0.0.3", 00:19:50.937 "trsvcid": "4420", 00:19:50.937 "trtype": "TCP" 00:19:50.937 }, 00:19:50.937 "peer_address": { 00:19:50.937 "adrfam": "IPv4", 00:19:50.937 "traddr": "10.0.0.1", 00:19:50.937 "trsvcid": "40074", 00:19:50.937 "trtype": "TCP" 00:19:50.937 }, 00:19:50.937 "qid": 0, 00:19:50.937 "state": "enabled", 00:19:50.937 "thread": "nvmf_tgt_poll_group_000" 00:19:50.937 } 00:19:50.937 ]' 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.937 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.195 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:51.195 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.131 09:51:39 
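# --- sketch (not captured trace output): host-side RPC pair per round ---
# Each round first pins the digest and DH group the SPDK host may negotiate, then
# attaches with one of the named keys. key1/ckey1 are keyring entries assumed to have
# been registered earlier in auth.sh, and /var/tmp/host.sock is the host application
# the test started:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1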
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.131 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.698 00:19:52.698 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.698 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.698 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.957 { 00:19:52.957 "auth": { 00:19:52.957 "dhgroup": "ffdhe3072", 00:19:52.957 "digest": "sha384", 00:19:52.957 "state": "completed" 00:19:52.957 }, 00:19:52.957 "cntlid": 67, 00:19:52.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:52.957 "listen_address": { 00:19:52.957 "adrfam": "IPv4", 00:19:52.957 "traddr": "10.0.0.3", 00:19:52.957 "trsvcid": "4420", 00:19:52.957 "trtype": "TCP" 00:19:52.957 }, 00:19:52.957 "peer_address": { 00:19:52.957 "adrfam": "IPv4", 00:19:52.957 "traddr": "10.0.0.1", 00:19:52.957 "trsvcid": "40110", 00:19:52.957 "trtype": "TCP" 00:19:52.957 }, 00:19:52.957 "qid": 0, 00:19:52.957 "state": "enabled", 00:19:52.957 "thread": "nvmf_tgt_poll_group_000" 00:19:52.957 } 00:19:52.957 ]' 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.957 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.216 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.216 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.216 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.216 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:53.216 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.151 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.716 00:19:54.716 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.716 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.716 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.975 { 00:19:54.975 "auth": { 00:19:54.975 "dhgroup": "ffdhe3072", 00:19:54.975 "digest": "sha384", 00:19:54.975 "state": "completed" 00:19:54.975 }, 00:19:54.975 "cntlid": 69, 00:19:54.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:54.975 "listen_address": { 00:19:54.975 "adrfam": "IPv4", 00:19:54.975 "traddr": "10.0.0.3", 00:19:54.975 "trsvcid": "4420", 00:19:54.975 "trtype": "TCP" 00:19:54.975 }, 00:19:54.975 "peer_address": { 00:19:54.975 "adrfam": "IPv4", 00:19:54.975 "traddr": "10.0.0.1", 00:19:54.975 "trsvcid": "52120", 00:19:54.975 "trtype": "TCP" 00:19:54.975 }, 00:19:54.975 "qid": 0, 00:19:54.975 "state": "enabled", 00:19:54.975 "thread": "nvmf_tgt_poll_group_000" 00:19:54.975 } 00:19:54.975 ]' 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:54.975 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.233 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:55.233 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:19:55.799 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.799 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:55.799 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.800 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.800 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.800 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.800 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.800 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.365 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.622 00:19:56.622 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.622 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.622 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.880 { 00:19:56.880 "auth": { 00:19:56.880 "dhgroup": "ffdhe3072", 00:19:56.880 "digest": "sha384", 00:19:56.880 "state": "completed" 00:19:56.880 }, 00:19:56.880 "cntlid": 71, 00:19:56.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:56.880 "listen_address": { 00:19:56.880 "adrfam": "IPv4", 00:19:56.880 "traddr": "10.0.0.3", 00:19:56.880 "trsvcid": "4420", 00:19:56.880 "trtype": "TCP" 00:19:56.880 }, 00:19:56.880 "peer_address": { 00:19:56.880 "adrfam": "IPv4", 00:19:56.880 "traddr": "10.0.0.1", 00:19:56.880 "trsvcid": "52144", 00:19:56.880 "trtype": "TCP" 00:19:56.880 }, 00:19:56.880 "qid": 0, 00:19:56.880 "state": "enabled", 00:19:56.880 "thread": "nvmf_tgt_poll_group_000" 00:19:56.880 } 00:19:56.880 ]' 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.880 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.137 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.137 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.137 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.137 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.137 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.396 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:57.396 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.960 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.218 09:51:45 
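# --- sketch (not captured trace output): target-side provisioning per round ---
# Before every connect the host NQN is (re)granted access to the subsystem together
# with the key it must present; rpc_cmd is the autotest wrapper around scripts/rpc.py
# talking to the target side. Passing --dhchap-ctrlr-key as well forces the controller
# to authenticate back to the host:
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ...and after the qpair checks the grant is removed again:
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"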
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.218 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.476 00:19:58.476 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.476 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.476 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.733 { 00:19:58.733 "auth": { 00:19:58.733 "dhgroup": "ffdhe4096", 00:19:58.733 "digest": "sha384", 00:19:58.733 "state": "completed" 00:19:58.733 }, 00:19:58.733 "cntlid": 73, 00:19:58.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:19:58.733 "listen_address": { 00:19:58.733 "adrfam": "IPv4", 00:19:58.733 "traddr": "10.0.0.3", 00:19:58.733 "trsvcid": "4420", 00:19:58.733 "trtype": "TCP" 00:19:58.733 }, 00:19:58.733 "peer_address": { 00:19:58.733 "adrfam": "IPv4", 00:19:58.733 "traddr": "10.0.0.1", 00:19:58.733 "trsvcid": "52188", 00:19:58.733 "trtype": "TCP" 00:19:58.733 }, 00:19:58.733 "qid": 0, 00:19:58.733 "state": "enabled", 00:19:58.733 "thread": "nvmf_tgt_poll_group_000" 00:19:58.733 } 00:19:58.733 ]' 00:19:58.733 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.990 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.248 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:59.248 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.812 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.070 09:51:47 
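# --- sketch (not captured trace output): the verification step ---
# After each attach the test reads the qpairs back from the target and asserts that
# in-band authentication completed with exactly the parameters it configured, which
# is what the repeated jq filters in the trace are doing:
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]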
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.636 00:20:00.636 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.636 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.636 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.894 { 00:20:00.894 "auth": { 00:20:00.894 "dhgroup": "ffdhe4096", 00:20:00.894 "digest": "sha384", 00:20:00.894 "state": "completed" 00:20:00.894 }, 00:20:00.894 "cntlid": 75, 00:20:00.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:00.894 "listen_address": { 00:20:00.894 "adrfam": "IPv4", 00:20:00.894 "traddr": "10.0.0.3", 00:20:00.894 "trsvcid": "4420", 00:20:00.894 "trtype": "TCP" 00:20:00.894 }, 00:20:00.894 "peer_address": { 00:20:00.894 "adrfam": "IPv4", 00:20:00.894 "traddr": "10.0.0.1", 00:20:00.894 "trsvcid": "52202", 00:20:00.894 "trtype": "TCP" 00:20:00.894 }, 00:20:00.894 "qid": 0, 00:20:00.894 "state": "enabled", 00:20:00.894 "thread": "nvmf_tgt_poll_group_000" 00:20:00.894 } 00:20:00.894 ]' 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.894 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.153 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:01.153 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.720 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.978 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.979 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.547 00:20:02.547 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.547 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.547 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.805 { 00:20:02.805 "auth": { 00:20:02.805 "dhgroup": "ffdhe4096", 00:20:02.805 "digest": "sha384", 00:20:02.805 "state": "completed" 00:20:02.805 }, 00:20:02.805 "cntlid": 77, 00:20:02.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:02.805 "listen_address": { 00:20:02.805 "adrfam": "IPv4", 00:20:02.805 "traddr": "10.0.0.3", 00:20:02.805 "trsvcid": "4420", 00:20:02.805 "trtype": "TCP" 00:20:02.805 }, 00:20:02.805 "peer_address": { 00:20:02.805 "adrfam": "IPv4", 00:20:02.805 "traddr": "10.0.0.1", 00:20:02.805 "trsvcid": "52226", 00:20:02.805 "trtype": "TCP" 00:20:02.805 }, 00:20:02.805 "qid": 0, 00:20:02.805 "state": "enabled", 00:20:02.805 "thread": "nvmf_tgt_poll_group_000" 00:20:02.805 } 00:20:02.805 ]' 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.805 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.062 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.062 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.062 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.319 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:03.319 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.917 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.175 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.175 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.434 00:20:04.434 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.434 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.434 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.003 { 00:20:05.003 "auth": { 00:20:05.003 "dhgroup": "ffdhe4096", 00:20:05.003 "digest": "sha384", 00:20:05.003 "state": "completed" 00:20:05.003 }, 00:20:05.003 "cntlid": 79, 00:20:05.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:05.003 "listen_address": { 00:20:05.003 "adrfam": "IPv4", 00:20:05.003 "traddr": "10.0.0.3", 00:20:05.003 "trsvcid": "4420", 00:20:05.003 "trtype": "TCP" 00:20:05.003 }, 00:20:05.003 "peer_address": { 00:20:05.003 "adrfam": "IPv4", 00:20:05.003 "traddr": "10.0.0.1", 00:20:05.003 "trsvcid": "37618", 00:20:05.003 "trtype": "TCP" 00:20:05.003 }, 00:20:05.003 "qid": 0, 00:20:05.003 "state": "enabled", 00:20:05.003 "thread": "nvmf_tgt_poll_group_000" 00:20:05.003 } 00:20:05.003 ]' 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.003 09:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.003 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.262 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:05.262 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.830 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.089 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.090 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.348 00:20:06.348 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.348 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.348 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.607 { 00:20:06.607 "auth": { 00:20:06.607 "dhgroup": "ffdhe6144", 00:20:06.607 "digest": "sha384", 00:20:06.607 "state": "completed" 00:20:06.607 }, 00:20:06.607 "cntlid": 81, 00:20:06.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:06.607 "listen_address": { 00:20:06.607 "adrfam": "IPv4", 00:20:06.607 "traddr": "10.0.0.3", 00:20:06.607 "trsvcid": "4420", 00:20:06.607 "trtype": "TCP" 00:20:06.607 }, 00:20:06.607 "peer_address": { 00:20:06.607 "adrfam": "IPv4", 00:20:06.607 "traddr": "10.0.0.1", 00:20:06.607 "trsvcid": "37650", 00:20:06.607 "trtype": "TCP" 00:20:06.607 }, 00:20:06.607 "qid": 0, 00:20:06.607 "state": "enabled", 00:20:06.607 "thread": "nvmf_tgt_poll_group_000" 00:20:06.607 } 00:20:06.607 ]' 00:20:06.607 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
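Each connect_authenticate iteration above drives the same host-side RPC cycle. A minimal standalone sketch of that cycle for the pass just shown (sha384 / ffdhe6144 / key0), assuming an SPDK target already listening on 10.0.0.3:4420 with subsystem nqn.2024-03.io.spdk:cnode0, a host RPC server on /var/tmp/host.sock, and keyring entries key0/ckey0 registered earlier in the run; the script is illustrative, not part of auth.sh:

    #!/usr/bin/env bash
    # Illustrative sketch of one connect_authenticate pass from the log above.
    # Assumptions: target at 10.0.0.3:4420 exposing nqn.2024-03.io.spdk:cnode0,
    # host bdev RPC server at /var/tmp/host.sock, keys key0/ckey0 already loaded.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Restrict the host to one digest/dhgroup pair for this pass.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host on the subsystem with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller; DH-HMAC-CHAP runs during this attach.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller exists, then tear it down again.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0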
00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.865 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.124 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:07.124 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.690 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.948 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.207 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.207 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.207 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.207 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.466 00:20:08.466 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.466 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.466 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.725 { 00:20:08.725 "auth": { 00:20:08.725 "dhgroup": "ffdhe6144", 00:20:08.725 "digest": "sha384", 00:20:08.725 "state": "completed" 00:20:08.725 }, 00:20:08.725 "cntlid": 83, 00:20:08.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:08.725 "listen_address": { 00:20:08.725 "adrfam": "IPv4", 00:20:08.725 "traddr": "10.0.0.3", 00:20:08.725 "trsvcid": "4420", 00:20:08.725 "trtype": "TCP" 00:20:08.725 }, 00:20:08.725 "peer_address": { 00:20:08.725 "adrfam": "IPv4", 00:20:08.725 "traddr": "10.0.0.1", 00:20:08.725 "trsvcid": "37666", 00:20:08.725 "trtype": "TCP" 00:20:08.725 }, 00:20:08.725 "qid": 0, 00:20:08.725 "state": 
"enabled", 00:20:08.725 "thread": "nvmf_tgt_poll_group_000" 00:20:08.725 } 00:20:08.725 ]' 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.725 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.984 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.984 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.984 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.243 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:09.243 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.811 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.070 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.639 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.639 { 00:20:10.639 "auth": { 00:20:10.639 "dhgroup": "ffdhe6144", 00:20:10.639 "digest": "sha384", 00:20:10.639 "state": "completed" 00:20:10.639 }, 00:20:10.639 "cntlid": 85, 00:20:10.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:10.639 "listen_address": { 00:20:10.639 "adrfam": "IPv4", 00:20:10.639 "traddr": "10.0.0.3", 00:20:10.639 "trsvcid": "4420", 00:20:10.639 "trtype": "TCP" 00:20:10.639 }, 00:20:10.639 "peer_address": { 00:20:10.639 "adrfam": "IPv4", 00:20:10.639 "traddr": "10.0.0.1", 00:20:10.639 
"trsvcid": "37694", 00:20:10.639 "trtype": "TCP" 00:20:10.639 }, 00:20:10.639 "qid": 0, 00:20:10.639 "state": "enabled", 00:20:10.639 "thread": "nvmf_tgt_poll_group_000" 00:20:10.639 } 00:20:10.639 ]' 00:20:10.639 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.898 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.156 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:11.157 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.724 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.983 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.551 00:20:12.551 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.551 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.551 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.810 { 00:20:12.810 "auth": { 00:20:12.810 "dhgroup": "ffdhe6144", 00:20:12.810 "digest": "sha384", 00:20:12.810 "state": "completed" 00:20:12.810 }, 00:20:12.810 "cntlid": 87, 00:20:12.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:12.810 "listen_address": { 00:20:12.810 "adrfam": "IPv4", 00:20:12.810 "traddr": "10.0.0.3", 00:20:12.810 "trsvcid": "4420", 00:20:12.810 "trtype": "TCP" 00:20:12.810 }, 00:20:12.810 "peer_address": { 00:20:12.810 "adrfam": "IPv4", 00:20:12.810 "traddr": "10.0.0.1", 
00:20:12.810 "trsvcid": "37728", 00:20:12.810 "trtype": "TCP" 00:20:12.810 }, 00:20:12.810 "qid": 0, 00:20:12.810 "state": "enabled", 00:20:12.810 "thread": "nvmf_tgt_poll_group_000" 00:20:12.810 } 00:20:12.810 ]' 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.810 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.811 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.811 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.811 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.811 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.811 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.380 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:13.380 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.948 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.207 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.775 00:20:14.775 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.775 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.775 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.034 { 00:20:15.034 "auth": { 00:20:15.034 "dhgroup": "ffdhe8192", 00:20:15.034 "digest": "sha384", 00:20:15.034 "state": "completed" 00:20:15.034 }, 00:20:15.034 "cntlid": 89, 00:20:15.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:15.034 "listen_address": { 00:20:15.034 "adrfam": "IPv4", 00:20:15.034 "traddr": "10.0.0.3", 00:20:15.034 "trsvcid": "4420", 00:20:15.034 "trtype": "TCP" 
00:20:15.034 }, 00:20:15.034 "peer_address": { 00:20:15.034 "adrfam": "IPv4", 00:20:15.034 "traddr": "10.0.0.1", 00:20:15.034 "trsvcid": "56062", 00:20:15.034 "trtype": "TCP" 00:20:15.034 }, 00:20:15.034 "qid": 0, 00:20:15.034 "state": "enabled", 00:20:15.034 "thread": "nvmf_tgt_poll_group_000" 00:20:15.034 } 00:20:15.034 ]' 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.034 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.602 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:15.602 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.170 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.428 09:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.428 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.013 00:20:17.013 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.013 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.013 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.271 { 00:20:17.271 "auth": { 00:20:17.271 "dhgroup": "ffdhe8192", 00:20:17.271 "digest": "sha384", 00:20:17.271 "state": "completed" 00:20:17.271 }, 00:20:17.271 "cntlid": 91, 00:20:17.271 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:17.271 "listen_address": { 00:20:17.271 "adrfam": "IPv4", 00:20:17.271 "traddr": "10.0.0.3", 00:20:17.271 "trsvcid": "4420", 00:20:17.271 "trtype": "TCP" 00:20:17.271 }, 00:20:17.271 "peer_address": { 00:20:17.271 "adrfam": "IPv4", 00:20:17.271 "traddr": "10.0.0.1", 00:20:17.271 "trsvcid": "56094", 00:20:17.271 "trtype": "TCP" 00:20:17.271 }, 00:20:17.271 "qid": 0, 00:20:17.271 "state": "enabled", 00:20:17.271 "thread": "nvmf_tgt_poll_group_000" 00:20:17.271 } 00:20:17.271 ]' 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.271 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.530 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.530 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.530 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.788 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:17.788 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.354 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.613 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.181 00:20:19.181 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.181 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.181 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.440 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.440 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.440 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.440 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.440 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.440 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.440 { 00:20:19.440 "auth": { 00:20:19.440 "dhgroup": "ffdhe8192", 
00:20:19.440 "digest": "sha384", 00:20:19.440 "state": "completed" 00:20:19.440 }, 00:20:19.440 "cntlid": 93, 00:20:19.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:19.440 "listen_address": { 00:20:19.440 "adrfam": "IPv4", 00:20:19.440 "traddr": "10.0.0.3", 00:20:19.440 "trsvcid": "4420", 00:20:19.440 "trtype": "TCP" 00:20:19.440 }, 00:20:19.440 "peer_address": { 00:20:19.440 "adrfam": "IPv4", 00:20:19.440 "traddr": "10.0.0.1", 00:20:19.440 "trsvcid": "56140", 00:20:19.440 "trtype": "TCP" 00:20:19.440 }, 00:20:19.440 "qid": 0, 00:20:19.440 "state": "enabled", 00:20:19.440 "thread": "nvmf_tgt_poll_group_000" 00:20:19.440 } 00:20:19.440 ]' 00:20:19.440 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.440 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.440 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.699 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.699 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.699 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.699 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.699 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.959 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:19.959 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:20.526 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.785 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.353 00:20:21.612 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.612 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.612 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.612 { 00:20:21.612 "auth": { 00:20:21.612 "dhgroup": 
"ffdhe8192", 00:20:21.612 "digest": "sha384", 00:20:21.612 "state": "completed" 00:20:21.612 }, 00:20:21.612 "cntlid": 95, 00:20:21.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:21.612 "listen_address": { 00:20:21.612 "adrfam": "IPv4", 00:20:21.612 "traddr": "10.0.0.3", 00:20:21.612 "trsvcid": "4420", 00:20:21.612 "trtype": "TCP" 00:20:21.612 }, 00:20:21.612 "peer_address": { 00:20:21.612 "adrfam": "IPv4", 00:20:21.612 "traddr": "10.0.0.1", 00:20:21.612 "trsvcid": "56160", 00:20:21.612 "trtype": "TCP" 00:20:21.612 }, 00:20:21.612 "qid": 0, 00:20:21.612 "state": "enabled", 00:20:21.612 "thread": "nvmf_tgt_poll_group_000" 00:20:21.612 } 00:20:21.612 ]' 00:20:21.612 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.870 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.129 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:22.129 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.697 
09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.697 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.956 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.957 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.957 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.216 00:20:23.216 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.216 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.216 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.783 { 00:20:23.783 "auth": { 00:20:23.783 "dhgroup": "null", 00:20:23.783 "digest": "sha512", 00:20:23.783 "state": "completed" 00:20:23.783 }, 00:20:23.783 "cntlid": 97, 00:20:23.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:23.783 "listen_address": { 00:20:23.783 "adrfam": "IPv4", 00:20:23.783 "traddr": "10.0.0.3", 00:20:23.783 "trsvcid": "4420", 00:20:23.783 "trtype": "TCP" 00:20:23.783 }, 00:20:23.783 "peer_address": { 00:20:23.783 "adrfam": "IPv4", 00:20:23.783 "traddr": "10.0.0.1", 00:20:23.783 "trsvcid": "44762", 00:20:23.783 "trtype": "TCP" 00:20:23.783 }, 00:20:23.783 "qid": 0, 00:20:23.783 "state": "enabled", 00:20:23.783 "thread": "nvmf_tgt_poll_group_000" 00:20:23.783 } 00:20:23.783 ]' 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.783 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.043 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:24.043 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.610 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.869 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.128 00:20:25.128 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.128 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.128 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.390 09:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.390 { 00:20:25.390 "auth": { 00:20:25.390 "dhgroup": "null", 00:20:25.390 "digest": "sha512", 00:20:25.390 "state": "completed" 00:20:25.390 }, 00:20:25.390 "cntlid": 99, 00:20:25.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:25.390 "listen_address": { 00:20:25.390 "adrfam": "IPv4", 00:20:25.390 "traddr": "10.0.0.3", 00:20:25.390 "trsvcid": "4420", 00:20:25.390 "trtype": "TCP" 00:20:25.390 }, 00:20:25.390 "peer_address": { 00:20:25.390 "adrfam": "IPv4", 00:20:25.390 "traddr": "10.0.0.1", 00:20:25.390 "trsvcid": "44796", 00:20:25.390 "trtype": "TCP" 00:20:25.390 }, 00:20:25.390 "qid": 0, 00:20:25.390 "state": "enabled", 00:20:25.390 "thread": "nvmf_tgt_poll_group_000" 00:20:25.390 } 00:20:25.390 ]' 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.390 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.390 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.650 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:25.650 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.650 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.650 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.650 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.908 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:25.908 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.475 09:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.475 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.733 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.991 00:20:26.991 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.991 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.991 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.249 { 00:20:27.249 "auth": { 00:20:27.249 "dhgroup": "null", 00:20:27.249 "digest": "sha512", 00:20:27.249 "state": "completed" 00:20:27.249 }, 00:20:27.249 "cntlid": 101, 00:20:27.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:27.249 "listen_address": { 00:20:27.249 "adrfam": "IPv4", 00:20:27.249 "traddr": "10.0.0.3", 00:20:27.249 "trsvcid": "4420", 00:20:27.249 "trtype": "TCP" 00:20:27.249 }, 00:20:27.249 "peer_address": { 00:20:27.249 "adrfam": "IPv4", 00:20:27.249 "traddr": "10.0.0.1", 00:20:27.249 "trsvcid": "44820", 00:20:27.249 "trtype": "TCP" 00:20:27.249 }, 00:20:27.249 "qid": 0, 00:20:27.249 "state": "enabled", 00:20:27.249 "thread": "nvmf_tgt_poll_group_000" 00:20:27.249 } 00:20:27.249 ]' 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.249 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.507 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:27.507 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.507 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.507 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.507 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.765 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:27.765 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.329 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.587 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.845 00:20:28.845 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.845 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.845 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.102 { 00:20:29.102 "auth": { 00:20:29.102 "dhgroup": "null", 00:20:29.102 "digest": "sha512", 00:20:29.102 "state": "completed" 00:20:29.102 }, 00:20:29.102 "cntlid": 103, 00:20:29.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:29.102 "listen_address": { 00:20:29.102 "adrfam": "IPv4", 00:20:29.102 "traddr": "10.0.0.3", 00:20:29.102 "trsvcid": "4420", 00:20:29.102 "trtype": "TCP" 00:20:29.102 }, 00:20:29.102 "peer_address": { 00:20:29.102 "adrfam": "IPv4", 00:20:29.102 "traddr": "10.0.0.1", 00:20:29.102 "trsvcid": "44834", 00:20:29.102 "trtype": "TCP" 00:20:29.102 }, 00:20:29.102 "qid": 0, 00:20:29.102 "state": "enabled", 00:20:29.102 "thread": "nvmf_tgt_poll_group_000" 00:20:29.102 } 00:20:29.102 ]' 00:20:29.102 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.359 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.617 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:29.617 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:30.205 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.481 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.749 00:20:30.749 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.749 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.749 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.007 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.007 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.008 
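(Between target-side passes the same credentials are re-exercised from the Linux host with nvme-cli, as in the connect/disconnect pairs above and below. The DHHC-1 blobs are the NVMe-oF DH-HMAC-CHAP secret representation: the second field records the HMAC the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why blobs of several lengths appear in this trace. A sketch of one round trip, with the host NQN/ID and secrets abbreviated:)

  # secrets are passed verbatim as printed by the trace above
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # expected output: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
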
09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.008 { 00:20:31.008 "auth": { 00:20:31.008 "dhgroup": "ffdhe2048", 00:20:31.008 "digest": "sha512", 00:20:31.008 "state": "completed" 00:20:31.008 }, 00:20:31.008 "cntlid": 105, 00:20:31.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:31.008 "listen_address": { 00:20:31.008 "adrfam": "IPv4", 00:20:31.008 "traddr": "10.0.0.3", 00:20:31.008 "trsvcid": "4420", 00:20:31.008 "trtype": "TCP" 00:20:31.008 }, 00:20:31.008 "peer_address": { 00:20:31.008 "adrfam": "IPv4", 00:20:31.008 "traddr": "10.0.0.1", 00:20:31.008 "trsvcid": "44864", 00:20:31.008 "trtype": "TCP" 00:20:31.008 }, 00:20:31.008 "qid": 0, 00:20:31.008 "state": "enabled", 00:20:31.008 "thread": "nvmf_tgt_poll_group_000" 00:20:31.008 } 00:20:31.008 ]' 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.008 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.526 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:31.526 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:32.094 09:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.094 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.354 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:32.354 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.354 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.355 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.923 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.923 { 00:20:32.923 "auth": { 00:20:32.923 "dhgroup": "ffdhe2048", 00:20:32.923 "digest": "sha512", 00:20:32.923 "state": "completed" 00:20:32.923 }, 00:20:32.923 "cntlid": 107, 00:20:32.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:32.923 "listen_address": { 00:20:32.923 "adrfam": "IPv4", 00:20:32.923 "traddr": "10.0.0.3", 00:20:32.923 "trsvcid": "4420", 00:20:32.923 "trtype": "TCP" 00:20:32.923 }, 00:20:32.923 "peer_address": { 00:20:32.923 "adrfam": "IPv4", 00:20:32.923 "traddr": "10.0.0.1", 00:20:32.923 "trsvcid": "48496", 00:20:32.923 "trtype": "TCP" 00:20:32.923 }, 00:20:32.923 "qid": 0, 00:20:32.923 "state": "enabled", 00:20:32.923 "thread": "nvmf_tgt_poll_group_000" 00:20:32.923 } 00:20:32.923 ]' 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.923 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.182 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.182 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.182 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.182 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.182 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.441 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:33.441 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.008 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.266 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.267 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.525 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.783 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.784 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.784 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.784 { 00:20:34.784 "auth": { 00:20:34.784 "dhgroup": "ffdhe2048", 00:20:34.784 "digest": "sha512", 00:20:34.784 "state": "completed" 00:20:34.784 }, 00:20:34.784 "cntlid": 109, 00:20:34.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:34.784 "listen_address": { 00:20:34.784 "adrfam": "IPv4", 00:20:34.784 "traddr": "10.0.0.3", 00:20:34.784 "trsvcid": "4420", 00:20:34.784 "trtype": "TCP" 00:20:34.784 }, 00:20:34.784 "peer_address": { 00:20:34.784 "adrfam": "IPv4", 00:20:34.784 "traddr": "10.0.0.1", 00:20:34.784 "trsvcid": "48526", 00:20:34.784 "trtype": "TCP" 00:20:34.784 }, 00:20:34.784 "qid": 0, 00:20:34.784 "state": "enabled", 00:20:34.784 "thread": "nvmf_tgt_poll_group_000" 00:20:34.784 } 00:20:34.784 ]' 00:20:34.784 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.042 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.301 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:35.301 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:35.868 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.868 09:52:23 
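(Every qpair dump in this trace is validated with the same three jq probes before the controller is detached; distilled against the sha512/ffdhe2048 passes running here:)

  # target app's default RPC socket (rpc_cmd in the script wraps this call)
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
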
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:35.868 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.868 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.868 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.868 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.869 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.869 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.127 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.695 00:20:36.695 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.695 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.695 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.954 { 00:20:36.954 "auth": { 00:20:36.954 "dhgroup": "ffdhe2048", 00:20:36.954 "digest": "sha512", 00:20:36.954 "state": "completed" 00:20:36.954 }, 00:20:36.954 "cntlid": 111, 00:20:36.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:36.954 "listen_address": { 00:20:36.954 "adrfam": "IPv4", 00:20:36.954 "traddr": "10.0.0.3", 00:20:36.954 "trsvcid": "4420", 00:20:36.954 "trtype": "TCP" 00:20:36.954 }, 00:20:36.954 "peer_address": { 00:20:36.954 "adrfam": "IPv4", 00:20:36.954 "traddr": "10.0.0.1", 00:20:36.954 "trsvcid": "48556", 00:20:36.954 "trtype": "TCP" 00:20:36.954 }, 00:20:36.954 "qid": 0, 00:20:36.954 "state": "enabled", 00:20:36.954 "thread": "nvmf_tgt_poll_group_000" 00:20:36.954 } 00:20:36.954 ]' 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.954 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.524 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:37.524 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.090 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.347 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.604 00:20:38.604 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.604 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.604 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.861 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.861 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.861 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.861 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.119 { 00:20:39.119 "auth": { 00:20:39.119 "dhgroup": "ffdhe3072", 00:20:39.119 "digest": "sha512", 00:20:39.119 "state": "completed" 00:20:39.119 }, 00:20:39.119 "cntlid": 113, 00:20:39.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:39.119 "listen_address": { 00:20:39.119 "adrfam": "IPv4", 00:20:39.119 "traddr": "10.0.0.3", 00:20:39.119 "trsvcid": "4420", 00:20:39.119 "trtype": "TCP" 00:20:39.119 }, 00:20:39.119 "peer_address": { 00:20:39.119 "adrfam": "IPv4", 00:20:39.119 "traddr": "10.0.0.1", 00:20:39.119 "trsvcid": "48586", 00:20:39.119 "trtype": "TCP" 00:20:39.119 }, 00:20:39.119 "qid": 0, 00:20:39.119 "state": "enabled", 00:20:39.119 "thread": "nvmf_tgt_poll_group_000" 00:20:39.119 } 00:20:39.119 ]' 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.119 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.120 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.120 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.120 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.377 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:39.377 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.942 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.200 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.767 00:20:40.767 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.767 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.767 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.026 { 00:20:41.026 "auth": { 00:20:41.026 "dhgroup": "ffdhe3072", 00:20:41.026 "digest": "sha512", 00:20:41.026 "state": "completed" 00:20:41.026 }, 00:20:41.026 "cntlid": 115, 00:20:41.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:41.026 "listen_address": { 00:20:41.026 "adrfam": "IPv4", 00:20:41.026 "traddr": "10.0.0.3", 00:20:41.026 "trsvcid": "4420", 00:20:41.026 "trtype": "TCP" 00:20:41.026 }, 00:20:41.026 "peer_address": { 00:20:41.026 "adrfam": "IPv4", 00:20:41.026 "traddr": "10.0.0.1", 00:20:41.026 "trsvcid": "48618", 00:20:41.026 "trtype": "TCP" 00:20:41.026 }, 00:20:41.026 "qid": 0, 00:20:41.026 "state": "enabled", 00:20:41.026 "thread": "nvmf_tgt_poll_group_000" 00:20:41.026 } 00:20:41.026 ]' 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.026 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.594 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:41.594 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid 
bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.162 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.422 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.681 00:20:42.681 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.681 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.681 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.940 { 00:20:42.940 "auth": { 00:20:42.940 "dhgroup": "ffdhe3072", 00:20:42.940 "digest": "sha512", 00:20:42.940 "state": "completed" 00:20:42.940 }, 00:20:42.940 "cntlid": 117, 00:20:42.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:42.940 "listen_address": { 00:20:42.940 "adrfam": "IPv4", 00:20:42.940 "traddr": "10.0.0.3", 00:20:42.940 "trsvcid": "4420", 00:20:42.940 "trtype": "TCP" 00:20:42.940 }, 00:20:42.940 "peer_address": { 00:20:42.940 "adrfam": "IPv4", 00:20:42.940 "traddr": "10.0.0.1", 00:20:42.940 "trsvcid": "46752", 00:20:42.940 "trtype": "TCP" 00:20:42.940 }, 00:20:42.940 "qid": 0, 00:20:42.940 "state": "enabled", 00:20:42.940 "thread": "nvmf_tgt_poll_group_000" 00:20:42.940 } 00:20:42.940 ]' 00:20:42.940 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.199 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.459 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:43.459 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.027 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.595 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.854 00:20:44.855 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.855 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.855 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.113 { 00:20:45.113 "auth": { 00:20:45.113 "dhgroup": "ffdhe3072", 00:20:45.113 "digest": "sha512", 00:20:45.113 "state": "completed" 00:20:45.113 }, 00:20:45.113 "cntlid": 119, 00:20:45.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:45.113 "listen_address": { 00:20:45.113 "adrfam": "IPv4", 00:20:45.113 "traddr": "10.0.0.3", 00:20:45.113 "trsvcid": "4420", 00:20:45.113 "trtype": "TCP" 00:20:45.113 }, 00:20:45.113 "peer_address": { 00:20:45.113 "adrfam": "IPv4", 00:20:45.113 "traddr": "10.0.0.1", 00:20:45.113 "trsvcid": "46784", 00:20:45.113 "trtype": "TCP" 00:20:45.113 }, 00:20:45.113 "qid": 0, 00:20:45.113 "state": "enabled", 00:20:45.113 "thread": "nvmf_tgt_poll_group_000" 00:20:45.113 } 00:20:45.113 ]' 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.113 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.372 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.372 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.372 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.372 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.372 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.631 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:45.631 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.199 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.458 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.026 00:20:47.026 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.026 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.026 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.285 { 00:20:47.285 "auth": { 00:20:47.285 "dhgroup": "ffdhe4096", 00:20:47.285 "digest": "sha512", 00:20:47.285 "state": "completed" 00:20:47.285 }, 00:20:47.285 "cntlid": 121, 00:20:47.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:47.285 "listen_address": { 00:20:47.285 "adrfam": "IPv4", 00:20:47.285 "traddr": "10.0.0.3", 00:20:47.285 "trsvcid": "4420", 00:20:47.285 "trtype": "TCP" 00:20:47.285 }, 00:20:47.285 "peer_address": { 00:20:47.285 "adrfam": "IPv4", 00:20:47.285 "traddr": "10.0.0.1", 00:20:47.285 "trsvcid": "46822", 00:20:47.285 "trtype": "TCP" 00:20:47.285 }, 00:20:47.285 "qid": 0, 00:20:47.285 "state": "enabled", 00:20:47.285 "thread": "nvmf_tgt_poll_group_000" 00:20:47.285 } 00:20:47.285 ]' 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.285 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.544 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:47.544 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:48.476 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.734 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.993 00:20:48.993 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.993 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.993 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.252 { 00:20:49.252 "auth": { 00:20:49.252 "dhgroup": "ffdhe4096", 00:20:49.252 "digest": "sha512", 00:20:49.252 "state": "completed" 00:20:49.252 }, 00:20:49.252 "cntlid": 123, 00:20:49.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:49.252 "listen_address": { 00:20:49.252 "adrfam": "IPv4", 00:20:49.252 "traddr": "10.0.0.3", 00:20:49.252 "trsvcid": "4420", 00:20:49.252 "trtype": "TCP" 00:20:49.252 }, 00:20:49.252 "peer_address": { 00:20:49.252 "adrfam": "IPv4", 00:20:49.252 "traddr": "10.0.0.1", 00:20:49.252 "trsvcid": "46858", 00:20:49.252 "trtype": "TCP" 00:20:49.252 }, 00:20:49.252 "qid": 0, 00:20:49.252 "state": "enabled", 00:20:49.252 "thread": "nvmf_tgt_poll_group_000" 00:20:49.252 } 00:20:49.252 ]' 00:20:49.252 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.511 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.511 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.511 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:49.511 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.511 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.511 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.511 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.770 09:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:49.770 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:50.337 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:50.596 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.597 09:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.597 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.164 00:20:51.164 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.164 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.164 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.423 { 00:20:51.423 "auth": { 00:20:51.423 "dhgroup": "ffdhe4096", 00:20:51.423 "digest": "sha512", 00:20:51.423 "state": "completed" 00:20:51.423 }, 00:20:51.423 "cntlid": 125, 00:20:51.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:51.423 "listen_address": { 00:20:51.423 "adrfam": "IPv4", 00:20:51.423 "traddr": "10.0.0.3", 00:20:51.423 "trsvcid": "4420", 00:20:51.423 "trtype": "TCP" 00:20:51.423 }, 00:20:51.423 "peer_address": { 00:20:51.423 "adrfam": "IPv4", 00:20:51.423 "traddr": "10.0.0.1", 00:20:51.423 "trsvcid": "46892", 00:20:51.423 "trtype": "TCP" 00:20:51.423 }, 00:20:51.423 "qid": 0, 00:20:51.423 "state": "enabled", 00:20:51.423 "thread": "nvmf_tgt_poll_group_000" 00:20:51.423 } 00:20:51.423 ]' 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.423 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.682 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:51.682 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.618 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:52.618 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.877 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.136 00:20:53.136 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.136 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.136 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.395 { 00:20:53.395 "auth": { 00:20:53.395 "dhgroup": "ffdhe4096", 00:20:53.395 "digest": "sha512", 00:20:53.395 "state": "completed" 00:20:53.395 }, 00:20:53.395 "cntlid": 127, 00:20:53.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:53.395 "listen_address": { 00:20:53.395 "adrfam": "IPv4", 00:20:53.395 "traddr": "10.0.0.3", 00:20:53.395 "trsvcid": "4420", 00:20:53.395 "trtype": "TCP" 00:20:53.395 }, 00:20:53.395 "peer_address": { 00:20:53.395 "adrfam": "IPv4", 00:20:53.395 "traddr": "10.0.0.1", 00:20:53.395 "trsvcid": "40494", 00:20:53.395 "trtype": "TCP" 00:20:53.395 }, 00:20:53.395 "qid": 0, 00:20:53.395 "state": "enabled", 00:20:53.395 "thread": "nvmf_tgt_poll_group_000" 00:20:53.395 } 00:20:53.395 ]' 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.395 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.653 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.653 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.653 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.653 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:53.653 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.586 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.586 09:52:42 
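
Condensed, each connect_authenticate iteration traced above comes down to three RPC calls: pin the host's DH-HMAC-CHAP parameters, authorize the host NQN on the target subsystem with a key pair, and attach the controller through the host-side SPDK app listening on /var/tmp/host.sock. A minimal sketch of one iteration, assuming the target already listens on 10.0.0.3:4420 and the named keys are loaded in both keyrings (a reconstruction of what the trace shows, not the literal target/auth.sh source):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock    # RPC socket of the host-side SPDK app
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516

# Host side: restrict the initiator to one digest/DH-group combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: authorize the host NQN with a host key and a controller key.
"$RPC" nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach; success implies DH-HMAC-CHAP completed with those keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
```
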
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.586 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.153 00:20:55.153 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.153 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.153 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.411 { 00:20:55.411 "auth": { 00:20:55.411 "dhgroup": "ffdhe6144", 00:20:55.411 "digest": "sha512", 00:20:55.411 "state": "completed" 00:20:55.411 }, 00:20:55.411 "cntlid": 129, 00:20:55.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:55.411 "listen_address": { 00:20:55.411 "adrfam": "IPv4", 00:20:55.411 "traddr": "10.0.0.3", 00:20:55.411 "trsvcid": "4420", 00:20:55.411 "trtype": "TCP" 00:20:55.411 }, 00:20:55.411 "peer_address": { 00:20:55.411 "adrfam": "IPv4", 00:20:55.411 "traddr": "10.0.0.1", 00:20:55.411 "trsvcid": "40534", 00:20:55.411 "trtype": "TCP" 00:20:55.411 }, 00:20:55.411 "qid": 0, 00:20:55.411 "state": "enabled", 00:20:55.411 "thread": "nvmf_tgt_poll_group_000" 00:20:55.411 } 00:20:55.411 ]' 00:20:55.411 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.411 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.411 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.669 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.669 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.669 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.669 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
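
The qpair dump that follows each attach is what turns the connect into an actual assertion: nvmf_subsystem_get_qpairs reports an auth object per queue pair, and the jq probes compare the negotiated digest, DH group, and final state against what was requested. The same three checks written out standalone ($RPC and $SUBSYS as in the sketch above):

```bash
# Fetch the subsystem's queue pairs, including their auth status.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")

# Negotiated parameters must match the requested ones, and auth must be done.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
```
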
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.669 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.934 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:55.934 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.545 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 09:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.819 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.385 00:20:57.385 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.385 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.385 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.644 { 00:20:57.644 "auth": { 00:20:57.644 "dhgroup": "ffdhe6144", 00:20:57.644 "digest": "sha512", 00:20:57.644 "state": "completed" 00:20:57.644 }, 00:20:57.644 "cntlid": 131, 00:20:57.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:57.644 "listen_address": { 00:20:57.644 "adrfam": "IPv4", 00:20:57.644 "traddr": "10.0.0.3", 00:20:57.644 "trsvcid": "4420", 00:20:57.644 "trtype": "TCP" 00:20:57.644 }, 00:20:57.644 "peer_address": { 00:20:57.644 "adrfam": "IPv4", 00:20:57.644 "traddr": "10.0.0.1", 00:20:57.644 "trsvcid": "40568", 00:20:57.644 "trtype": "TCP" 00:20:57.644 }, 00:20:57.644 "qid": 0, 00:20:57.644 "state": "enabled", 00:20:57.644 "thread": "nvmf_tgt_poll_group_000" 00:20:57.644 } 00:20:57.644 ]' 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.644 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.207 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:58.207 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:58.771 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.028 09:52:46 
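
A side note on the odd-looking comparisons that recur in this trace, such as [[ nvme0 == \n\v\m\e\0 ]] and [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]: inside [[ ]], an unquoted right-hand side of == is treated as a glob pattern, so the script quotes it to force a literal match, and bash's xtrace renders that quoting by backslash-escaping every character. The underlying check is simply:

```bash
# xtrace prints the quoted RHS as \n\v\m\e\0; the comparison is literal.
name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] && echo "controller nvme0 attached"
```
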
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.028 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.285 00:20:59.285 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.285 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.285 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.542 { 00:20:59.542 "auth": { 00:20:59.542 "dhgroup": "ffdhe6144", 00:20:59.542 "digest": "sha512", 00:20:59.542 "state": "completed" 00:20:59.542 }, 00:20:59.542 "cntlid": 133, 00:20:59.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:20:59.542 "listen_address": { 00:20:59.542 "adrfam": "IPv4", 00:20:59.542 "traddr": "10.0.0.3", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "trtype": "TCP" 00:20:59.542 }, 00:20:59.542 "peer_address": { 00:20:59.542 "adrfam": "IPv4", 00:20:59.542 "traddr": "10.0.0.1", 00:20:59.542 "trsvcid": "40606", 00:20:59.542 "trtype": "TCP" 00:20:59.542 }, 00:20:59.542 "qid": 0, 00:20:59.542 "state": "enabled", 00:20:59.542 "thread": "nvmf_tgt_poll_group_000" 00:20:59.542 } 00:20:59.542 ]' 00:20:59.542 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.800 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.057 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:21:00.058 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.625 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.884 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.452 00:21:01.452 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.452 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.452 09:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.712 { 00:21:01.712 "auth": { 00:21:01.712 "dhgroup": "ffdhe6144", 00:21:01.712 "digest": "sha512", 00:21:01.712 "state": "completed" 00:21:01.712 }, 00:21:01.712 "cntlid": 135, 00:21:01.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:01.712 "listen_address": { 00:21:01.712 "adrfam": "IPv4", 00:21:01.712 "traddr": "10.0.0.3", 00:21:01.712 "trsvcid": "4420", 00:21:01.712 "trtype": "TCP" 00:21:01.712 }, 00:21:01.712 "peer_address": { 00:21:01.712 "adrfam": "IPv4", 00:21:01.712 "traddr": "10.0.0.1", 00:21:01.712 "trsvcid": "40644", 00:21:01.712 "trtype": "TCP" 00:21:01.712 }, 00:21:01.712 "qid": 0, 00:21:01.712 "state": "enabled", 00:21:01.712 "thread": "nvmf_tgt_poll_group_000" 00:21:01.712 } 00:21:01.712 ]' 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
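
Note that the keyid-3 iteration above authorizes the host with only --dhchap-key key3: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) step expands to an empty array when no controller key is defined for that index (the script indexes ckeys with its third positional argument), so the flag is omitted entirely and authentication is unidirectional for that key, since bidirectional authentication needs a controller key. A self-contained demo of the expansion idiom:

```bash
# ${var:+word} yields word only when var is set and non-empty; keeping the
# result in an array makes the flag and its value appear -- or vanish -- together.
declare -A ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # deliberately no entry for 3

for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-(none)}"
done
# keyid=0 -> extra args: --dhchap-ctrlr-key ckey0
# keyid=3 -> extra args: (none)
```
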
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.712 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.971 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.971 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.971 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.230 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:02.230 09:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.798 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.058 09:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.626 00:21:03.626 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.626 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.626 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.885 { 00:21:03.885 "auth": { 00:21:03.885 "dhgroup": "ffdhe8192", 00:21:03.885 "digest": "sha512", 00:21:03.885 "state": "completed" 00:21:03.885 }, 00:21:03.885 "cntlid": 137, 00:21:03.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:03.885 "listen_address": { 00:21:03.885 "adrfam": "IPv4", 00:21:03.885 "traddr": "10.0.0.3", 00:21:03.885 "trsvcid": "4420", 00:21:03.885 "trtype": "TCP" 00:21:03.885 }, 00:21:03.885 "peer_address": { 00:21:03.885 "adrfam": "IPv4", 00:21:03.885 "traddr": "10.0.0.1", 00:21:03.885 "trsvcid": "56606", 00:21:03.885 "trtype": "TCP" 00:21:03.885 }, 00:21:03.885 "qid": 0, 00:21:03.885 "state": "enabled", 00:21:03.885 "thread": "nvmf_tgt_poll_group_000" 00:21:03.885 } 00:21:03.885 ]' 00:21:03.885 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.144 09:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.144 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.403 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:21:04.403 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.971 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.230 09:52:52 
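
The --dhchap-secret and --dhchap-ctrl-secret strings used throughout follow the NVMe-oF in-band authentication secret representation DHHC-1:<t>:<base64>:, where <t> records how the shared secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key material plus a CRC-32 check value. Recent nvme-cli releases can mint such secrets; the sketch below is from memory, so verify the flag spellings against the installed nvme-cli, and the DHHC-1 values are placeholders rather than keys from this log:

```bash
# Generate a SHA-512-transformed secret for this host (assumed invocation;
# check `nvme gen-dhchap-key --help` on the system under test).
nvme gen-dhchap-key --hmac=3 --nqn="$HOSTNQN"

# Passing distinct host and controller secrets requests bidirectional
# authentication, mirroring the nvme_connect calls in the trace.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBSYS" -q "$HOSTNQN" \
        --hostid "${HOSTNQN##*:}" -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64>:'
```
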
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.230 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.798 00:21:06.056 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.056 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.056 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.315 { 00:21:06.315 "auth": { 00:21:06.315 "dhgroup": "ffdhe8192", 00:21:06.315 "digest": "sha512", 00:21:06.315 "state": "completed" 00:21:06.315 }, 00:21:06.315 "cntlid": 139, 00:21:06.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:06.315 "listen_address": { 00:21:06.315 "adrfam": "IPv4", 00:21:06.315 "traddr": "10.0.0.3", 00:21:06.315 "trsvcid": "4420", 00:21:06.315 "trtype": "TCP" 00:21:06.315 }, 00:21:06.315 "peer_address": { 00:21:06.315 "adrfam": "IPv4", 00:21:06.315 "traddr": "10.0.0.1", 00:21:06.315 "trsvcid": "56620", 00:21:06.315 "trtype": "TCP" 00:21:06.315 }, 00:21:06.315 "qid": 0, 00:21:06.315 "state": "enabled", 00:21:06.315 "thread": "nvmf_tgt_poll_group_000" 00:21:06.315 } 00:21:06.315 ]' 00:21:06.315 09:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.315 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.316 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.574 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:21:06.574 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: --dhchap-ctrl-secret DHHC-1:02:MWEwNThjOTJkZTQxZjQ1NWMzMzVlNDNkODAyY2RmYTllM2FkYjg4NWVmY2E5M2Vi36Aqfw==: 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.141 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.400 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.985 00:21:07.985 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.985 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.985 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.243 { 00:21:08.243 "auth": { 00:21:08.243 "dhgroup": "ffdhe8192", 00:21:08.243 "digest": "sha512", 00:21:08.243 "state": "completed" 00:21:08.243 }, 00:21:08.243 "cntlid": 141, 00:21:08.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:08.243 "listen_address": { 00:21:08.243 "adrfam": "IPv4", 00:21:08.243 "traddr": "10.0.0.3", 00:21:08.243 "trsvcid": "4420", 00:21:08.243 "trtype": "TCP" 00:21:08.243 }, 00:21:08.243 "peer_address": { 00:21:08.243 "adrfam": "IPv4", 00:21:08.243 "traddr": "10.0.0.1", 00:21:08.243 "trsvcid": "56652", 00:21:08.243 "trtype": "TCP" 00:21:08.243 }, 00:21:08.243 "qid": 0, 00:21:08.243 "state": 
"enabled", 00:21:08.243 "thread": "nvmf_tgt_poll_group_000" 00:21:08.243 } 00:21:08.243 ]' 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.243 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.502 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.502 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.502 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.502 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.502 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.761 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:21:08.761 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:01:MDRkOWMwMDY4ZmU3ZGM4OTkyZWJmMjljMjIwYmQwNTBgqq+X: 00:21:09.328 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.328 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:09.328 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.329 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.896 00:21:09.896 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.896 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.896 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.462 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.462 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.462 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.462 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.463 { 00:21:10.463 "auth": { 00:21:10.463 "dhgroup": "ffdhe8192", 00:21:10.463 "digest": "sha512", 00:21:10.463 "state": "completed" 00:21:10.463 }, 00:21:10.463 "cntlid": 143, 00:21:10.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:10.463 "listen_address": { 00:21:10.463 "adrfam": "IPv4", 00:21:10.463 "traddr": "10.0.0.3", 00:21:10.463 "trsvcid": "4420", 00:21:10.463 "trtype": "TCP" 00:21:10.463 }, 00:21:10.463 "peer_address": { 00:21:10.463 "adrfam": "IPv4", 00:21:10.463 "traddr": "10.0.0.1", 00:21:10.463 "trsvcid": "56672", 00:21:10.463 "trtype": "TCP" 00:21:10.463 }, 00:21:10.463 "qid": 0, 00:21:10.463 
"state": "enabled", 00:21:10.463 "thread": "nvmf_tgt_poll_group_000" 00:21:10.463 } 00:21:10.463 ]' 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.463 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.721 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:10.721 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.287 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.546 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.113 00:21:12.370 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.370 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.370 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.628 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.628 { 00:21:12.628 "auth": { 00:21:12.628 "dhgroup": "ffdhe8192", 00:21:12.628 "digest": "sha512", 00:21:12.628 "state": "completed" 00:21:12.628 }, 00:21:12.628 
"cntlid": 145, 00:21:12.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:12.628 "listen_address": { 00:21:12.628 "adrfam": "IPv4", 00:21:12.628 "traddr": "10.0.0.3", 00:21:12.629 "trsvcid": "4420", 00:21:12.629 "trtype": "TCP" 00:21:12.629 }, 00:21:12.629 "peer_address": { 00:21:12.629 "adrfam": "IPv4", 00:21:12.629 "traddr": "10.0.0.1", 00:21:12.629 "trsvcid": "56692", 00:21:12.629 "trtype": "TCP" 00:21:12.629 }, 00:21:12.629 "qid": 0, 00:21:12.629 "state": "enabled", 00:21:12.629 "thread": "nvmf_tgt_poll_group_000" 00:21:12.629 } 00:21:12.629 ]' 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.629 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.194 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:21:13.194 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:00:ZTU3MTAyZmFhN2EwOGQwZDJiOGVmODU5N2Y3MDY3YzNkODBhNjEwZjgyNjgyZmRhHq2DYA==: --dhchap-ctrl-secret DHHC-1:03:ZDA3YTQyOTcwODViNmVhMzMzZTUzMjMzOWVjYzhlOGE5NjI2NDBkZWU5MGQ4MjIzMDZiNmRkMGNkMmFmYTI5Mnc3vmo=: 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 00:21:13.762 09:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:13.762 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:14.021 2024/12/05 09:53:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:14.021 request: 00:21:14.021 { 00:21:14.021 "method": "bdev_nvme_attach_controller", 00:21:14.021 "params": { 00:21:14.021 "name": "nvme0", 00:21:14.021 "trtype": "tcp", 00:21:14.021 "traddr": "10.0.0.3", 00:21:14.021 "adrfam": "ipv4", 00:21:14.021 "trsvcid": "4420", 00:21:14.021 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:14.021 "prchk_reftag": false, 00:21:14.021 "prchk_guard": false, 00:21:14.021 "hdgst": false, 00:21:14.021 "ddgst": false, 00:21:14.021 "dhchap_key": "key2", 00:21:14.021 "allow_unrecognized_csi": false 00:21:14.021 } 00:21:14.021 } 00:21:14.021 Got JSON-RPC error response 00:21:14.021 GoRPCClient: error on JSON-RPC call 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
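Note: the Input/output error above is the expected outcome, not a failure. Only key1 was registered for this host at target/auth.sh@144, so an attach attempt with key2 has to be rejected, and the trace wraps the call in the NOT helper so that a non-zero exit counts as a pass. A minimal sketch of that inversion pattern, assuming a simplified form of the real helper in common/autotest_common.sh:

    # Minimal sketch, assuming a simplified NOT; the real helper in
    # common/autotest_common.sh also classifies exit statuses (es > 128 etc.).
    NOT() {
        if "$@"; then
            return 1  # command unexpectedly succeeded: the test fails
        fi
        return 0      # command failed as expected: the test passes
    }

    # Example from the trace: a key the target never registered must not work.
    NOT bdev_connect -b nvme0 --dhchap-key key2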
00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.280 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.281 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.848 2024/12/05 09:53:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:14.848 request: 00:21:14.848 { 00:21:14.848 "method": "bdev_nvme_attach_controller", 00:21:14.848 "params": { 00:21:14.848 "name": "nvme0", 00:21:14.848 "trtype": "tcp", 00:21:14.848 "traddr": "10.0.0.3", 00:21:14.848 "adrfam": "ipv4", 00:21:14.848 "trsvcid": "4420", 00:21:14.848 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:14.848 "prchk_reftag": false, 00:21:14.848 "prchk_guard": false, 00:21:14.848 "hdgst": false, 00:21:14.848 "ddgst": false, 00:21:14.848 "dhchap_key": "key1", 00:21:14.848 "dhchap_ctrlr_key": "ckey2", 00:21:14.848 "allow_unrecognized_csi": false 00:21:14.848 } 00:21:14.848 } 00:21:14.848 Got JSON-RPC error response 00:21:14.848 GoRPCClient: error on JSON-RPC call 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:14.848 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.849 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.417 2024/12/05 09:53:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:15.417 request: 00:21:15.417 { 00:21:15.417 "method": "bdev_nvme_attach_controller", 00:21:15.417 "params": { 00:21:15.417 "name": "nvme0", 00:21:15.417 "trtype": "tcp", 00:21:15.417 "traddr": "10.0.0.3", 00:21:15.417 "adrfam": "ipv4", 00:21:15.417 "trsvcid": "4420", 00:21:15.417 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:15.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:15.417 "prchk_reftag": false, 00:21:15.417 "prchk_guard": false, 00:21:15.417 "hdgst": false, 00:21:15.417 "ddgst": false, 00:21:15.417 "dhchap_key": "key1", 00:21:15.417 "dhchap_ctrlr_key": "ckey1", 00:21:15.417 "allow_unrecognized_csi": false 00:21:15.417 } 00:21:15.417 } 00:21:15.417 Got JSON-RPC error response 00:21:15.417 GoRPCClient: error on JSON-RPC call 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 83081 00:21:15.417 09:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83081 ']' 00:21:15.417 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83081 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83081 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:15.418 killing process with pid 83081 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83081' 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83081 00:21:15.418 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83081 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=87859 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 87859 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 87859 ']' 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
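Note: this is where the test switches gears. The first nvmf target (pid 83081) is killed and nvmfappstart relaunches nvmf_tgt with --wait-for-rpc, so the keyring can be populated before any subsystem starts serving, and with -L nvmf_auth to enable auth-layer debug logging. Stripped of the namespace and trap plumbing in nvmf/common.sh, the relaunch traced above amounts to:

    # Relaunch as shown in the trace (setup/teardown plumbing omitted):
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"  # block until /var/tmp/spdk.sock answers RPCs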
00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.355 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.732 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.732 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:17.732 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.732 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.732 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 87859 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 87859 ']' 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
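Note: waitforlisten above simply polls until the application's RPC socket answers. A rough sketch of that pattern follows (an assumed simplification: the real function in common/autotest_common.sh carries extra bookkeeping, and wait_for_rpc_sock is a hypothetical name), built on the real rpc_get_methods RPC:

    # Hypothetical simplification of waitforlisten: poll the RPC socket
    # until the target answers, bailing out early if the process dies.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1  # app exited
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }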
00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.732 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 null0 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uym 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3zW ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3zW 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.e4S 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.lxm ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lxm 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.299 09:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D79 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.rr9 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rr9 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.St9 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.299 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
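Note: because the target was restarted with --wait-for-rpc, every DH-HMAC-CHAP key is re-registered as a keyring file key before anything reconnects; a controller key (ckey) is only added where the matching file exists, which is why ckey3 falls through the [[ -n '' ]] branch above. Condensed, the loop at target/auth.sh@174-176 traced above is:

    # Condensed from the trace; keys[]/ckeys[] hold the /tmp/spdk.key-*
    # files generated earlier in the test run.
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done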
00:21:18.300 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.236 nvme0n1 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.236 { 00:21:19.236 "auth": { 00:21:19.236 "dhgroup": "ffdhe8192", 00:21:19.236 "digest": "sha512", 00:21:19.236 "state": "completed" 00:21:19.236 }, 00:21:19.236 "cntlid": 1, 00:21:19.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:19.236 "listen_address": { 00:21:19.236 "adrfam": "IPv4", 00:21:19.236 "traddr": "10.0.0.3", 00:21:19.236 "trsvcid": "4420", 00:21:19.236 "trtype": "TCP" 00:21:19.236 }, 00:21:19.236 "peer_address": { 00:21:19.236 "adrfam": "IPv4", 00:21:19.236 "traddr": "10.0.0.1", 00:21:19.236 "trsvcid": "49142", 00:21:19.236 "trtype": "TCP" 00:21:19.236 }, 00:21:19.236 "qid": 0, 00:21:19.236 "state": "enabled", 00:21:19.236 "thread": "nvmf_tgt_poll_group_000" 00:21:19.236 } 00:21:19.236 ]' 00:21:19.236 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.506 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.781 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:19.781 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:20.348 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.348 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:20.348 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.348 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key3 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:20.606 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.865 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.124 2024/12/05 09:53:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:21.124 request: 00:21:21.124 { 00:21:21.124 "method": "bdev_nvme_attach_controller", 00:21:21.124 "params": { 00:21:21.124 "name": "nvme0", 00:21:21.124 "trtype": "tcp", 00:21:21.124 "traddr": "10.0.0.3", 00:21:21.124 "adrfam": "ipv4", 00:21:21.124 "trsvcid": "4420", 00:21:21.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:21.124 "prchk_reftag": false, 00:21:21.124 "prchk_guard": false, 00:21:21.124 "hdgst": false, 00:21:21.124 "ddgst": false, 00:21:21.124 "dhchap_key": "key3", 00:21:21.124 "allow_unrecognized_csi": false 00:21:21.124 } 00:21:21.124 } 00:21:21.124 Got JSON-RPC error response 00:21:21.124 GoRPCClient: error on JSON-RPC call 00:21:21.124 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:21.124 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.124 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.124 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.125 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:21.125 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:21.125 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:21.125 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.383 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.642 2024/12/05 09:53:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:21.642 request: 00:21:21.642 { 00:21:21.642 "method": "bdev_nvme_attach_controller", 00:21:21.642 "params": { 00:21:21.642 "name": "nvme0", 00:21:21.642 "trtype": "tcp", 00:21:21.642 "traddr": "10.0.0.3", 00:21:21.642 "adrfam": "ipv4", 00:21:21.642 "trsvcid": "4420", 00:21:21.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:21.642 "prchk_reftag": false, 00:21:21.642 "prchk_guard": false, 00:21:21.642 "hdgst": false, 00:21:21.642 "ddgst": false, 00:21:21.642 "dhchap_key": "key3", 00:21:21.642 "allow_unrecognized_csi": false 00:21:21.642 } 00:21:21.642 } 00:21:21.642 Got JSON-RPC error response 00:21:21.642 GoRPCClient: error on JSON-RPC call 00:21:21.642 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:21.642 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.642 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.643 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:21.901 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:21.901 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.901 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.901 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.901 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:21.902 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:22.469 2024/12/05 09:53:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:22.470 request: 00:21:22.470 { 00:21:22.470 "method": "bdev_nvme_attach_controller", 00:21:22.470 "params": { 00:21:22.470 "name": "nvme0", 00:21:22.470 "trtype": "tcp", 00:21:22.470 "traddr": "10.0.0.3", 00:21:22.470 "adrfam": "ipv4", 00:21:22.470 "trsvcid": "4420", 00:21:22.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:22.470 "prchk_reftag": false, 00:21:22.470 "prchk_guard": false, 00:21:22.470 "hdgst": false, 00:21:22.470 "ddgst": false, 00:21:22.470 "dhchap_key": "key0", 00:21:22.470 "dhchap_ctrlr_key": "key1", 00:21:22.470 "allow_unrecognized_csi": false 00:21:22.470 } 00:21:22.470 } 00:21:22.470 Got JSON-RPC error response 00:21:22.470 GoRPCClient: error on JSON-RPC call 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:22.470 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:22.728 nvme0n1 00:21:22.728 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:22.728 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:22.728 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.988 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.988 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.988 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:23.247 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:24.185 nvme0n1 00:21:24.185 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:24.185 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.185 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:24.444 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.703 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.703 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:24.703 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -l 0 --dhchap-secret DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: --dhchap-ctrl-secret DHHC-1:03:OTI4MjgxYjE2MGUxNzlhNjY4NjM3OWUyMzg0YWFjYmI4Y2IxZTJiYTA4MjkyNDA3MGVkZGIwMjA2ZGY2Mjg4NgOWTPs=: 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
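Note: the closing steps exercise live key rotation. nvmf_subsystem_set_keys first pins the host to key1 (verified by the successful key1 attach above), then rotates to key2/key3; after each rotation only the freshly configured keys may authenticate, which the NOT-wrapped key1 attach that follows is about to confirm. The rotation pair from the trace, with $hostnqn standing in for the long nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-... value:

    # Rotate the host's keys on the live subsystem ...
    rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # ... after which the host side must attach with the rotated pair:
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3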
00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.271 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:25.531 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:26.100 2024/12/05 09:53:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:26.100 request: 00:21:26.100 { 00:21:26.100 "method": "bdev_nvme_attach_controller", 00:21:26.100 "params": { 00:21:26.100 "name": "nvme0", 00:21:26.100 "trtype": "tcp", 00:21:26.100 "traddr": "10.0.0.3", 00:21:26.100 "adrfam": "ipv4", 
00:21:26.100 "trsvcid": "4420", 00:21:26.100 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516", 00:21:26.100 "prchk_reftag": false, 00:21:26.100 "prchk_guard": false, 00:21:26.100 "hdgst": false, 00:21:26.100 "ddgst": false, 00:21:26.100 "dhchap_key": "key1", 00:21:26.100 "allow_unrecognized_csi": false 00:21:26.100 } 00:21:26.100 } 00:21:26.100 Got JSON-RPC error response 00:21:26.100 GoRPCClient: error on JSON-RPC call 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:26.100 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:27.042 nvme0n1 00:21:27.042 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:27.042 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:27.042 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.300 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.300 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.300 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:27.562 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:27.820 nvme0n1 00:21:27.820 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:27.820 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:27.820 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.078 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.078 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.078 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: '' 2s 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: ]] 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzJkNjIwMzczNzdhODdjOWFlOTdmM2IwMzY4MDQ1MmKwT2Ig: 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:28.336 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:30.242 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:21:30.242 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.242 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.242 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: 2s 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: ]] 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmY1ZGI1ZDlmYTBlZTZkMmYyZjYyMTUzZDM4NTUwOWQ1Y2U1OTQzMzkxYzY3Y2FmT0PIJQ==: 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:30.499 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:32.400 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:33.335 nvme0n1 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.335 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.902 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:33.902 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:33.902 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:34.162 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.731 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:21:35.299 2024/12/05 09:53:22 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:35.299 request: 00:21:35.299 { 00:21:35.299 "method": "bdev_nvme_set_keys", 00:21:35.299 "params": { 00:21:35.299 "name": "nvme0", 00:21:35.299 "dhchap_key": "key1", 00:21:35.299 "dhchap_ctrlr_key": "key3" 00:21:35.299 } 00:21:35.299 } 00:21:35.299 Got JSON-RPC error response 00:21:35.299 GoRPCClient: error on JSON-RPC call 00:21:35.299 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:35.300 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.559 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:35.559 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.944 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.881 nvme0n1 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:37.881 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.449 2024/12/05 09:53:25 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:21:38.449 request: 00:21:38.449 { 00:21:38.449 "method": "bdev_nvme_set_keys", 00:21:38.449 "params": { 00:21:38.449 "name": "nvme0", 00:21:38.449 "dhchap_key": "key2", 00:21:38.449 "dhchap_ctrlr_key": "key0" 00:21:38.449 } 00:21:38.449 } 00:21:38.449 Got JSON-RPC error response 00:21:38.449 GoRPCClient: error on JSON-RPC call 00:21:38.449 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.449 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.449 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.449 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.449 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:38.449 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
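When bdev_nvme_set_keys names a pair the target no longer accepts, the RPC fails with Code=-13 (Permission denied) as above, and the controller, having been attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, is expected to drop out on its own. The test then polls the host's controller list until it is empty, which is the jq length / sleep 1s cadence visible in the trace. A sketch of that polling idiom, using the host RPC socket from this run:

HOSTRPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock)
# wait, in 1s steps, until the failed controller has been torn down
while (( $("${HOSTRPC[@]}" bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1
done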
00:21:38.449 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.707 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:38.707 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 83125 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83125 ']' 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83125 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83125 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:40.084 killing process with pid 83125 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83125' 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83125 00:21:40.084 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83125 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.991 rmmod nvme_tcp 00:21:41.991 rmmod nvme_fabrics 00:21:41.991 rmmod nvme_keyring 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 87859 ']' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 87859 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 87859 ']' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 87859 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87859 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.991 killing process with pid 87859 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87859' 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 87859 00:21:41.991 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 87859 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:42.929 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:43.188 09:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Uym /tmp/spdk.key-sha256.e4S /tmp/spdk.key-sha384.D79 /tmp/spdk.key-sha512.St9 /tmp/spdk.key-sha512.3zW /tmp/spdk.key-sha384.lxm /tmp/spdk.key-sha256.rr9 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:43.188 00:21:43.188 real 3m4.765s 00:21:43.188 user 7m25.081s 00:21:43.188 sys 0m22.157s 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.188 ************************************ 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.188 END TEST nvmf_auth_target 00:21:43.188 ************************************ 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.188 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.448 ************************************ 00:21:43.448 START TEST nvmf_bdevio_no_huge 00:21:43.448 ************************************ 00:21:43.448 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:43.448 * Looking for test storage... 
00:21:43.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:43.448 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:43.448 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:43.448 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.448 --rc genhtml_branch_coverage=1 00:21:43.448 --rc genhtml_function_coverage=1 00:21:43.448 --rc genhtml_legend=1 00:21:43.448 --rc geninfo_all_blocks=1 00:21:43.448 --rc geninfo_unexecuted_blocks=1 00:21:43.448 00:21:43.448 ' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.448 --rc genhtml_branch_coverage=1 00:21:43.448 --rc genhtml_function_coverage=1 00:21:43.448 --rc genhtml_legend=1 00:21:43.448 --rc geninfo_all_blocks=1 00:21:43.448 --rc geninfo_unexecuted_blocks=1 00:21:43.448 00:21:43.448 ' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.448 --rc genhtml_branch_coverage=1 00:21:43.448 --rc genhtml_function_coverage=1 00:21:43.448 --rc genhtml_legend=1 00:21:43.448 --rc geninfo_all_blocks=1 00:21:43.448 --rc geninfo_unexecuted_blocks=1 00:21:43.448 00:21:43.448 ' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.448 --rc genhtml_branch_coverage=1 00:21:43.448 --rc genhtml_function_coverage=1 00:21:43.448 --rc genhtml_legend=1 00:21:43.448 --rc geninfo_all_blocks=1 00:21:43.448 --rc geninfo_unexecuted_blocks=1 00:21:43.448 00:21:43.448 ' 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.448 
09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:43.448 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.449 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.449 
09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:43.449 Cannot find device "nvmf_init_br" 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:43.449 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:43.449 Cannot find device "nvmf_init_br2" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:43.709 Cannot find device "nvmf_tgt_br" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.709 Cannot find device "nvmf_tgt_br2" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:43.709 Cannot find device "nvmf_init_br" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:43.709 Cannot find device "nvmf_init_br2" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:43.709 Cannot find device "nvmf_tgt_br" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:43.709 Cannot find device "nvmf_tgt_br2" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:43.709 Cannot find device "nvmf_br" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:43.709 Cannot find device "nvmf_init_if" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:43.709 Cannot find device "nvmf_init_if2" 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:21:43.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:43.709 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:43.968 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.968 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:43.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:21:43.969 00:21:43.969 --- 10.0.0.3 ping statistics --- 00:21:43.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.969 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:43.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:43.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:43.969 00:21:43.969 --- 10.0.0.4 ping statistics --- 00:21:43.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.969 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:43.969 00:21:43.969 --- 10.0.0.1 ping statistics --- 00:21:43.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.969 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:43.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:43.969 00:21:43.969 --- 10.0.0.2 ping statistics --- 00:21:43.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.969 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=88744 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 88744 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 88744 ']' 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.969 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:44.228 [2024-12-05 09:53:31.646708] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
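[Annotation] nvmf_veth_init has just finished building the test network above, and everything NVMe/TCP-related from here on runs over it: two initiator-side veth pairs left on the host, two target-side pairs whose far ends live inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A minimal standalone reduction of the same topology (one pair per side instead of two; names and 10.0.0.x addresses copied from the trace; run as root):

# namespace plus one initiator-side and one target-side veth pair
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # both ends stay on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # far end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# address the endpoints as the trace does
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bridge the host-side peers together and bring everything up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ping -c 1 10.0.0.3   # host -> namespaced target, mirroring the checks in the trace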
00:21:44.228 [2024-12-05 09:53:31.646888] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:44.487 [2024-12-05 09:53:31.875294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.487 [2024-12-05 09:53:32.052390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.487 [2024-12-05 09:53:32.052479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.487 [2024-12-05 09:53:32.052502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.487 [2024-12-05 09:53:32.052522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.487 [2024-12-05 09:53:32.052537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.487 [2024-12-05 09:53:32.054687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:44.487 [2024-12-05 09:53:32.054846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:44.487 [2024-12-05 09:53:32.055021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:44.487 [2024-12-05 09:53:32.055419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.056 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.056 [2024-12-05 09:53:32.679982] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 Malloc0 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 [2024-12-05 09:53:32.772033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.316 { 00:21:45.316 "params": { 00:21:45.316 "name": "Nvme$subsystem", 00:21:45.316 "trtype": "$TEST_TRANSPORT", 00:21:45.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.316 "adrfam": "ipv4", 00:21:45.316 "trsvcid": "$NVMF_PORT", 00:21:45.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.316 "hdgst": ${hdgst:-false}, 00:21:45.316 "ddgst": ${ddgst:-false} 00:21:45.316 }, 00:21:45.316 "method": "bdev_nvme_attach_controller" 00:21:45.316 } 00:21:45.316 EOF 00:21:45.316 )") 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
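[Annotation] gen_nvmf_target_json, whose expansion is traced here, assembles the bdevio --json input by appending one here-doc template per subsystem to a config array; the shell fills in $NVMF_FIRST_TARGET_IP, $NVMF_PORT and the rest at append time, IFS=, joins the fragments, and jq validates and pretty-prints the result (the expanded JSON appears immediately below in the trace). A stripped-down sketch of the same idiom for a single subsystem, with the hdgst/ddgst defaults simplified to plain false:

#!/usr/bin/env bash
# illustrative reduction of the gen_nvmf_target_json pattern
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.3 NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

IFS=,
printf '%s\n' "${config[*]}" | jq .   # comma-join the fragments, then validate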
00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:45.316 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:45.316 "params": { 00:21:45.316 "name": "Nvme1", 00:21:45.316 "trtype": "tcp", 00:21:45.316 "traddr": "10.0.0.3", 00:21:45.316 "adrfam": "ipv4", 00:21:45.316 "trsvcid": "4420", 00:21:45.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.316 "hdgst": false, 00:21:45.316 "ddgst": false 00:21:45.316 }, 00:21:45.316 "method": "bdev_nvme_attach_controller" 00:21:45.316 }' 00:21:45.316 [2024-12-05 09:53:32.900847] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:45.316 [2024-12-05 09:53:32.901081] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88804 ] 00:21:45.575 [2024-12-05 09:53:33.133988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:45.834 [2024-12-05 09:53:33.304830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.834 [2024-12-05 09:53:33.305011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.834 [2024-12-05 09:53:33.305029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.403 I/O targets: 00:21:46.403 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:46.403 00:21:46.403 00:21:46.403 CUnit - A unit testing framework for C - Version 2.1-3 00:21:46.403 http://cunit.sourceforge.net/ 00:21:46.403 00:21:46.403 00:21:46.403 Suite: bdevio tests on: Nvme1n1 00:21:46.403 Test: blockdev write read block ...passed 00:21:46.403 Test: blockdev write zeroes read block ...passed 00:21:46.403 Test: blockdev write zeroes read no split ...passed 00:21:46.403 Test: blockdev write zeroes read split ...passed 00:21:46.403 Test: blockdev write zeroes read split partial ...passed 00:21:46.403 Test: blockdev reset ...[2024-12-05 09:53:33.926861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:46.403 [2024-12-05 09:53:33.927059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:21:46.403 [2024-12-05 09:53:33.946146] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:46.403 passed 00:21:46.403 Test: blockdev write read 8 blocks ...passed 00:21:46.403 Test: blockdev write read size > 128k ...passed 00:21:46.403 Test: blockdev write read invalid size ...passed 00:21:46.403 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:46.403 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:46.403 Test: blockdev write read max offset ...passed 00:21:46.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:46.696 Test: blockdev writev readv 8 blocks ...passed 00:21:46.696 Test: blockdev writev readv 30 x 1block ...passed 00:21:46.696 Test: blockdev writev readv block ...passed 00:21:46.696 Test: blockdev writev readv size > 128k ...passed 00:21:46.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:46.696 Test: blockdev comparev and writev ...[2024-12-05 09:53:34.125164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.125225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.125273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.125290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.125763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.125821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.125846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.125860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.126486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.126538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.126578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.126593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.127023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.127057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.127096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:46.696 [2024-12-05 09:53:34.127111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.696 passed 00:21:46.696 Test: blockdev nvme passthru rw ...passed 00:21:46.696 Test: blockdev nvme passthru vendor specific ...[2024-12-05 09:53:34.209368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:46.696 [2024-12-05 09:53:34.209412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.209642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:46.696 [2024-12-05 09:53:34.209665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.209828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:46.696 [2024-12-05 09:53:34.209860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.696 [2024-12-05 09:53:34.210104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:46.696 [2024-12-05 09:53:34.210141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.696 passed 00:21:46.696 Test: blockdev nvme admin passthru ...passed 00:21:46.696 Test: blockdev copy ...passed 00:21:46.696 00:21:46.696 Run Summary: Type Total Ran Passed Failed Inactive 00:21:46.696 suites 1 1 n/a 0 0 00:21:46.696 tests 23 23 23 0 0 00:21:46.696 asserts 152 152 152 0 n/a 00:21:46.696 00:21:46.696 Elapsed time = 1.026 seconds 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.670 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.671 rmmod nvme_tcp 00:21:47.671 rmmod nvme_fabrics 00:21:47.671 rmmod nvme_keyring 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
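[Annotation] An aside on the suite that just finished: bdevio boots a minimal SPDK application from the --json config generated above, opens the Nvme1n1 bdev attached over NVMe/TCP, and runs 23 CUnit tests against it. The COMPARE FAILURE and ABORTED - FAILED FUSED notices are the expected negative half of the fused compare-and-write tests (a mismatching compare must abort its paired write), which is why the run still ends 23/23 passed. Outside the harness, the invocation reduces to something like the following, with bdev_config.json standing in for the JSON printed earlier:

bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$bdevio" --json bdev_config.json --no-huge -s 1024   # same flags as the traced run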
nvmf/common.sh@128 -- # set -e 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 88744 ']' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 88744 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 88744 ']' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 88744 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88744 00:21:47.671 killing process with pid 88744 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88744' 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 88744 00:21:47.671 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 88744 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:48.236 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:48.494 09:53:35 
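[Annotation] A note on the cleanup in progress here: every firewall rule nvmftestinit installs carries an -m comment --comment 'SPDK_NVMF:...' tag, so nvmftestfini never has to track rule numbers; its iptr helper re-installs the ruleset minus the tagged entries, then the links, bridge, and namespace come down in reverse order of creation. Condensed, the tag-and-sweep idiom looks like this (the final netns removal is an assumption, since _remove_spdk_ns is xtrace-silenced in the log):

# setup: insert the rule with a searchable tag
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# teardown: drop every SPDK_NVMF-tagged rule in one pass
iptables-save | grep -v SPDK_NVMF | iptables-restore

# then dismantle the virtual network: bridge, host links, namespaced links
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk   # assumed; hidden inside _remove_spdk_ns in the trace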
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:48.494 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:21:48.494 ************************************ 00:21:48.494 END TEST nvmf_bdevio_no_huge 00:21:48.494 ************************************ 00:21:48.494 00:21:48.494 real 0m5.253s 00:21:48.494 user 0m18.888s 00:21:48.494 sys 0m1.808s 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.494 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.753 ************************************ 00:21:48.753 START TEST nvmf_tls 00:21:48.753 ************************************ 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:48.753 * Looking for test storage... 
00:21:48.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:48.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.753 --rc genhtml_branch_coverage=1 00:21:48.753 --rc genhtml_function_coverage=1 00:21:48.753 --rc genhtml_legend=1 00:21:48.753 --rc geninfo_all_blocks=1 00:21:48.753 --rc geninfo_unexecuted_blocks=1 00:21:48.753 00:21:48.753 ' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:48.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.753 --rc genhtml_branch_coverage=1 00:21:48.753 --rc genhtml_function_coverage=1 00:21:48.753 --rc genhtml_legend=1 00:21:48.753 --rc geninfo_all_blocks=1 00:21:48.753 --rc geninfo_unexecuted_blocks=1 00:21:48.753 00:21:48.753 ' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:48.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.753 --rc genhtml_branch_coverage=1 00:21:48.753 --rc genhtml_function_coverage=1 00:21:48.753 --rc genhtml_legend=1 00:21:48.753 --rc geninfo_all_blocks=1 00:21:48.753 --rc geninfo_unexecuted_blocks=1 00:21:48.753 00:21:48.753 ' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:48.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.753 --rc genhtml_branch_coverage=1 00:21:48.753 --rc genhtml_function_coverage=1 00:21:48.753 --rc genhtml_legend=1 00:21:48.753 --rc geninfo_all_blocks=1 00:21:48.753 --rc geninfo_unexecuted_blocks=1 00:21:48.753 00:21:48.753 ' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.753 09:53:36 
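[Annotation] The lt 1.15 2 probe just traced is cmp_versions from scripts/common.sh: both version strings are split on ./-/: into arrays and walked index by index, numerically, with the shorter array padded with zeros. A self-contained sketch of the same idea (function name is illustrative; assumes purely numeric components):

#!/usr/bin/env bash
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller at this position
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo '1.15 < 2'   # matches the lcov check in the trace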
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.753 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.754 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.754 
09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:48.754 Cannot find device "nvmf_init_br" 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:48.754 Cannot find device "nvmf_init_br2" 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:48.754 Cannot find device "nvmf_tgt_br" 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.754 Cannot find device "nvmf_tgt_br2" 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:21:48.754 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:49.012 Cannot find device "nvmf_init_br" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:49.012 Cannot find device "nvmf_init_br2" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:49.012 Cannot find device "nvmf_tgt_br" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:49.012 Cannot find device "nvmf_tgt_br2" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:49.012 Cannot find device "nvmf_br" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:49.012 Cannot find device "nvmf_init_if" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:49.012 Cannot find device "nvmf_init_if2" 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:49.012 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.013 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:49.271 09:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:49.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:49.271 00:21:49.271 --- 10.0.0.3 ping statistics --- 00:21:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.271 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:49.271 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:49.271 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:21:49.271 00:21:49.271 --- 10.0.0.4 ping statistics --- 00:21:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.271 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:21:49.271 00:21:49.271 --- 10.0.0.1 ping statistics --- 00:21:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.271 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:49.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:49.271 00:21:49.271 --- 10.0.0.2 ping statistics --- 00:21:49.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.271 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89083 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89083 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89083 ']' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.271 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 [2024-12-05 09:53:36.838170] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
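At this point the nvmf/common.sh trace has finished building the test network: two veth pairs on the initiator side (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2/24), two on the target side moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4/24), all joined through the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port 4420 and ping checks in both directions. A condensed, hand-written sketch of one of the two symmetric paths (the namespace itself is created earlier in the script; names and addresses are taken from the trace):

```bash
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins both bridge-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.3                                            # initiator -> target reachability
```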
00:21:49.271 [2024-12-05 09:53:36.838350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.529 [2024-12-05 09:53:37.035174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.786 [2024-12-05 09:53:37.181870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.786 [2024-12-05 09:53:37.181982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.786 [2024-12-05 09:53:37.182005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.786 [2024-12-05 09:53:37.182064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.786 [2024-12-05 09:53:37.182080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.786 [2024-12-05 09:53:37.183437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:50.353 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:50.612 true 00:21:50.612 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:50.612 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.871 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:50.871 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:50.871 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:51.129 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.129 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:51.388 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:51.388 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:51.388 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:51.646 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:51.646 09:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.903 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:51.903 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:51.903 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.903 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:52.162 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:52.162 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:52.162 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:52.420 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.420 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:52.420 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:52.420 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:52.420 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:52.694 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.694 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:52.952 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:53.211 09:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.tzOfF2Jkqy 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Cb9PmrPLax 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tzOfF2Jkqy 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Cb9PmrPLax 00:21:53.211 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:53.470 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:54.038 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.tzOfF2Jkqy 00:21:54.038 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tzOfF2Jkqy 00:21:54.038 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:54.296 [2024-12-05 09:53:41.723051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.296 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:54.555 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:54.814 [2024-12-05 09:53:42.267198] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:54.814 [2024-12-05 09:53:42.267539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.814 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:55.073 malloc0 00:21:55.073 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:55.332 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tzOfF2Jkqy 00:21:55.590 09:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:55.849 09:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tzOfF2Jkqy 00:22:08.046 Initializing NVMe Controllers 00:22:08.046 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.046 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.046 Initialization complete. Launching workers. 00:22:08.046 ======================================================== 00:22:08.046 Latency(us) 00:22:08.046 Device Information : IOPS MiB/s Average min max 00:22:08.046 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7655.49 29.90 8362.43 2499.90 10991.42 00:22:08.046 ======================================================== 00:22:08.046 Total : 7655.49 29.90 8362.43 2499.90 10991.42 00:22:08.046 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tzOfF2Jkqy 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tzOfF2Jkqy 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=89458 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 89458 /var/tmp/bdevperf.sock 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89458 ']' 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.046 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.046 [2024-12-05 09:53:53.843509] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
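Before the perf run above, the target/tls.sh xtrace brought up the TLS-enabled NVMe/TCP target with the following RPC sequence (collected from the commands shown in the trace; the rpc.py path is shortened here):

```bash
RPC="scripts/rpc.py"   # abbreviated; the trace uses /home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC sock_set_default_impl -i ssl                     # route sockets through the ssl implementation
$RPC sock_impl_set_options -i ssl --tls-version 13    # pin TLS 1.3
$RPC framework_start_init                             # leave the --wait-for-rpc state
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.tzOfF2Jkqy    # PSK file created above, chmod 0600
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The spdk_nvme_perf run whose results appear above then connects as host1 with --psk-path /tmp/tmp.tzOfF2Jkqy and sustains roughly 7.6k IOPS over the TLS connection, confirming the happy path before the negative cases begin.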
00:22:08.046 [2024-12-05 09:53:53.843686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89458 ] 00:22:08.046 [2024-12-05 09:53:54.030837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.046 [2024-12-05 09:53:54.173896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.046 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.046 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:08.046 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tzOfF2Jkqy 00:22:08.046 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:08.046 [2024-12-05 09:53:55.113700] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.046 TLSTESTn1 00:22:08.046 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:08.046 Running I/O for 10 seconds... 00:22:09.920 3314.00 IOPS, 12.95 MiB/s [2024-12-05T09:53:58.485Z] 3328.00 IOPS, 13.00 MiB/s [2024-12-05T09:53:59.421Z] 3328.00 IOPS, 13.00 MiB/s [2024-12-05T09:54:00.381Z] 3328.00 IOPS, 13.00 MiB/s [2024-12-05T09:54:01.757Z] 3328.00 IOPS, 13.00 MiB/s [2024-12-05T09:54:02.695Z] 3328.00 IOPS, 13.00 MiB/s [2024-12-05T09:54:03.630Z] 3322.14 IOPS, 12.98 MiB/s [2024-12-05T09:54:04.567Z] 3331.50 IOPS, 13.01 MiB/s [2024-12-05T09:54:05.502Z] 3346.22 IOPS, 13.07 MiB/s [2024-12-05T09:54:05.502Z] 3360.90 IOPS, 13.13 MiB/s 00:22:17.873 Latency(us) 00:22:17.873 [2024-12-05T09:54:05.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.873 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:17.873 Verification LBA range: start 0x0 length 0x2000 00:22:17.873 TLSTESTn1 : 10.02 3367.03 13.15 0.00 0.00 37950.61 6106.76 28478.37 00:22:17.873 [2024-12-05T09:54:05.502Z] =================================================================================================================== 00:22:17.873 [2024-12-05T09:54:05.502Z] Total : 3367.03 13.15 0.00 0.00 37950.61 6106.76 28478.37 00:22:17.873 { 00:22:17.873 "results": [ 00:22:17.873 { 00:22:17.873 "job": "TLSTESTn1", 00:22:17.873 "core_mask": "0x4", 00:22:17.873 "workload": "verify", 00:22:17.873 "status": "finished", 00:22:17.873 "verify_range": { 00:22:17.873 "start": 0, 00:22:17.873 "length": 8192 00:22:17.873 }, 00:22:17.873 "queue_depth": 128, 00:22:17.873 "io_size": 4096, 00:22:17.873 "runtime": 10.018616, 00:22:17.873 "iops": 3367.0319333528705, 00:22:17.873 "mibps": 13.15246848965965, 00:22:17.873 "io_failed": 0, 00:22:17.873 "io_timeout": 0, 00:22:17.873 "avg_latency_us": 37950.61124606873, 00:22:17.873 "min_latency_us": 6106.763636363637, 00:22:17.873 "max_latency_us": 28478.37090909091 00:22:17.873 } 00:22:17.873 ], 00:22:17.873 "core_count": 1 00:22:17.873 } 00:22:17.873 09:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 89458 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89458 ']' 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89458 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89458 00:22:17.873 killing process with pid 89458 00:22:17.873 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.873 00:22:17.873 Latency(us) 00:22:17.873 [2024-12-05T09:54:05.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.873 [2024-12-05T09:54:05.502Z] =================================================================================================================== 00:22:17.873 [2024-12-05T09:54:05.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89458' 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89458 00:22:17.873 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89458 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cb9PmrPLax 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cb9PmrPLax 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cb9PmrPLax 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Cb9PmrPLax 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=89629 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 89629 /var/tmp/bdevperf.sock 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89629 ']' 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.805 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.062 [2024-12-05 09:54:06.524562] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:19.062 [2024-12-05 09:54:06.524758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89629 ] 00:22:19.320 [2024-12-05 09:54:06.706222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.320 [2024-12-05 09:54:06.816888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.886 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.886 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:19.886 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Cb9PmrPLax 00:22:20.146 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:20.404 [2024-12-05 09:54:07.878437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.404 [2024-12-05 09:54:07.887925] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:20.404 [2024-12-05 09:54:07.888841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:22:20.404 [2024-12-05 09:54:07.889816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:20.404 [2024-12-05 
09:54:07.890815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:20.404 [2024-12-05 09:54:07.890863] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:20.404 [2024-12-05 09:54:07.890887] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:20.404 [2024-12-05 09:54:07.890908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:20.404 2024/12/05 09:54:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:20.404 request: 00:22:20.404 { 00:22:20.404 "method": "bdev_nvme_attach_controller", 00:22:20.405 "params": { 00:22:20.405 "name": "TLSTEST", 00:22:20.405 "trtype": "tcp", 00:22:20.405 "traddr": "10.0.0.3", 00:22:20.405 "adrfam": "ipv4", 00:22:20.405 "trsvcid": "4420", 00:22:20.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.405 "prchk_reftag": false, 00:22:20.405 "prchk_guard": false, 00:22:20.405 "hdgst": false, 00:22:20.405 "ddgst": false, 00:22:20.405 "psk": "key0", 00:22:20.405 "allow_unrecognized_csi": false 00:22:20.405 } 00:22:20.405 } 00:22:20.405 Got JSON-RPC error response 00:22:20.405 GoRPCClient: error on JSON-RPC call 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 89629 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89629 ']' 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89629 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89629 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:20.405 killing process with pid 89629 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89629' 00:22:20.405 Received shutdown signal, test time was about 10.000000 seconds 00:22:20.405 00:22:20.405 Latency(us) 00:22:20.405 [2024-12-05T09:54:08.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.405 [2024-12-05T09:54:08.034Z] =================================================================================================================== 00:22:20.405 [2024-12-05T09:54:08.034Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89629 00:22:20.405 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 89629 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tzOfF2Jkqy 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tzOfF2Jkqy 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tzOfF2Jkqy 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tzOfF2Jkqy 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=89689 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 89689 /var/tmp/bdevperf.sock 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89689 ']' 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.370 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.629 [2024-12-05 09:54:09.038855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:21.629 [2024-12-05 09:54:09.039079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89689 ] 00:22:21.629 [2024-12-05 09:54:09.220149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.888 [2024-12-05 09:54:09.335046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.456 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.456 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:22.456 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tzOfF2Jkqy 00:22:22.715 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:22.974 [2024-12-05 09:54:10.394605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.974 [2024-12-05 09:54:10.405549] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:22.974 [2024-12-05 09:54:10.405599] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:22.974 [2024-12-05 09:54:10.405698] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.974 [2024-12-05 09:54:10.406628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:22:22.974 [2024-12-05 09:54:10.407603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:22.974 [2024-12-05 09:54:10.408604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:22.975 [2024-12-05 09:54:10.408633] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:22.975 [2024-12-05 09:54:10.408653] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:22.975 [2024-12-05 09:54:10.408674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
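The decisive diagnostic in the failure that follows is "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1": the target resolves the TLS PSK by an identity string built from the client's host NQN and the subsystem NQN, and only host1 was registered via nvmf_subsystem_add_host --psk. The later @153 case fails the same way from the other side of the pair (host1 against the unregistered cnode2), and the JSON-RPC error dump below is the client-side symptom. A hypothetical illustration of the lookup the server callback performs (the table and variable names here are illustrative, not SPDK internals):

```bash
# Hypothetical sketch: SPDK's real table is populated by nvmf_subsystem_add_host --psk
# and keyed on the TLS PSK identity string reported in the trace.
declare -A psk_by_identity=(
    ["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]="key0"
)
identity="NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
if [[ -z ${psk_by_identity[$identity]:-} ]]; then
    echo "Could not find PSK for identity: $identity" >&2   # what tcp.c reports above
    # handshake aborts -> initiator sees 'Transport endpoint is not connected' (errno 107)
fi
```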
00:22:22.975 2024/12/05 09:54:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:22.975 request: 00:22:22.975 { 00:22:22.975 "method": "bdev_nvme_attach_controller", 00:22:22.975 "params": { 00:22:22.975 "name": "TLSTEST", 00:22:22.975 "trtype": "tcp", 00:22:22.975 "traddr": "10.0.0.3", 00:22:22.975 "adrfam": "ipv4", 00:22:22.975 "trsvcid": "4420", 00:22:22.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.975 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:22.975 "prchk_reftag": false, 00:22:22.975 "prchk_guard": false, 00:22:22.975 "hdgst": false, 00:22:22.975 "ddgst": false, 00:22:22.975 "psk": "key0", 00:22:22.975 "allow_unrecognized_csi": false 00:22:22.975 } 00:22:22.975 } 00:22:22.975 Got JSON-RPC error response 00:22:22.975 GoRPCClient: error on JSON-RPC call 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 89689 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89689 ']' 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89689 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89689 00:22:22.975 killing process with pid 89689 00:22:22.975 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.975 00:22:22.975 Latency(us) 00:22:22.975 [2024-12-05T09:54:10.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.975 [2024-12-05T09:54:10.604Z] =================================================================================================================== 00:22:22.975 [2024-12-05T09:54:10.604Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89689' 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89689 00:22:22.975 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89689 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.912 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tzOfF2Jkqy 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tzOfF2Jkqy 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tzOfF2Jkqy 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tzOfF2Jkqy 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=89748 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 89748 /var/tmp/bdevperf.sock 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89748 ']' 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.912 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.912 [2024-12-05 09:54:11.525891] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:22:23.912 [2024-12-05 09:54:11.526118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89748 ] 00:22:24.172 [2024-12-05 09:54:11.693533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.430 [2024-12-05 09:54:11.805876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.996 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.996 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:24.996 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tzOfF2Jkqy 00:22:24.996 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.255 [2024-12-05 09:54:12.759716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.255 [2024-12-05 09:54:12.768589] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.255 [2024-12-05 09:54:12.768636] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.255 [2024-12-05 09:54:12.768734] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.255 [2024-12-05 09:54:12.768899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:22:25.255 [2024-12-05 09:54:12.769865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:25.255 [2024-12-05 09:54:12.770855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:25.255 [2024-12-05 09:54:12.770893] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:22:25.255 [2024-12-05 09:54:12.770910] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:25.255 [2024-12-05 09:54:12.770943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
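Each of these negative cases is wrapped in the NOT helper from common/autotest_common.sh, whose evaluation shows up a few lines below as es=1, (( es > 128 )), and (( !es == 0 )): the test step passes precisely because run_bdevperf failed. A simplified sketch of that pattern (the real helper additionally maps specific core-dump signals before inverting, which is what the es > 128 branch in the trace is for):

```bash
NOT() {
    local es=0
    "$@" || es=$?            # run the wrapped command, capture its exit status
    if ((es > 128)); then    # 128+N means death by signal N; handled specially upstream
        return "$es"
    fi
    ((es != 0))              # success for NOT() == the command itself failed
}

# Usage, as in the trace (expected to fail: cnode2 has no registered host/PSK):
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tzOfF2Jkqy
```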
00:22:25.255 2024/12/05 09:54:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:25.255 request: 00:22:25.255 { 00:22:25.255 "method": "bdev_nvme_attach_controller", 00:22:25.255 "params": { 00:22:25.255 "name": "TLSTEST", 00:22:25.255 "trtype": "tcp", 00:22:25.255 "traddr": "10.0.0.3", 00:22:25.255 "adrfam": "ipv4", 00:22:25.255 "trsvcid": "4420", 00:22:25.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.255 "prchk_reftag": false, 00:22:25.255 "prchk_guard": false, 00:22:25.255 "hdgst": false, 00:22:25.255 "ddgst": false, 00:22:25.255 "psk": "key0", 00:22:25.255 "allow_unrecognized_csi": false 00:22:25.255 } 00:22:25.255 } 00:22:25.255 Got JSON-RPC error response 00:22:25.255 GoRPCClient: error on JSON-RPC call 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 89748 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89748 ']' 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89748 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.255 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89748 00:22:25.255 killing process with pid 89748 00:22:25.255 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.255 00:22:25.255 Latency(us) 00:22:25.256 [2024-12-05T09:54:12.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.256 [2024-12-05T09:54:12.885Z] =================================================================================================================== 00:22:25.256 [2024-12-05T09:54:12.885Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.256 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:25.256 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:25.256 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89748' 00:22:25.256 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89748 00:22:25.256 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89748 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.191 09:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=89813 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 89813 /var/tmp/bdevperf.sock 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89813 ']' 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.191 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.192 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.192 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.192 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 [2024-12-05 09:54:13.864340] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:22:26.451 [2024-12-05 09:54:13.864807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89813 ] 00:22:26.451 [2024-12-05 09:54:14.028336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.710 [2024-12-05 09:54:14.143340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.275 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.275 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:27.275 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:27.533 [2024-12-05 09:54:15.043713] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:27.533 [2024-12-05 09:54:15.043775] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:27.533 2024/12/05 09:54:15 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:27.533 request: 00:22:27.533 { 00:22:27.533 "method": "keyring_file_add_key", 00:22:27.533 "params": { 00:22:27.533 "name": "key0", 00:22:27.533 "path": "" 00:22:27.533 } 00:22:27.533 } 00:22:27.533 Got JSON-RPC error response 00:22:27.533 GoRPCClient: error on JSON-RPC call 00:22:27.534 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:27.791 [2024-12-05 09:54:15.323980] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.791 [2024-12-05 09:54:15.324065] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:27.791 2024/12/05 09:54:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:27.791 request: 00:22:27.791 { 00:22:27.791 "method": "bdev_nvme_attach_controller", 00:22:27.791 "params": { 00:22:27.791 "name": "TLSTEST", 00:22:27.791 "trtype": "tcp", 00:22:27.791 "traddr": "10.0.0.3", 00:22:27.791 "adrfam": "ipv4", 00:22:27.791 "trsvcid": "4420", 00:22:27.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.791 "prchk_reftag": false, 00:22:27.791 "prchk_guard": false, 00:22:27.791 "hdgst": false, 00:22:27.791 "ddgst": false, 00:22:27.791 "psk": "key0", 00:22:27.791 "allow_unrecognized_csi": false 00:22:27.791 } 00:22:27.791 } 00:22:27.791 Got JSON-RPC error response 00:22:27.791 GoRPCClient: error on JSON-RPC call 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 89813 00:22:27.791 09:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89813 ']' 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89813 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89813 00:22:27.791 killing process with pid 89813 00:22:27.791 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.791 00:22:27.791 Latency(us) 00:22:27.791 [2024-12-05T09:54:15.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.791 [2024-12-05T09:54:15.420Z] =================================================================================================================== 00:22:27.791 [2024-12-05T09:54:15.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89813' 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89813 00:22:27.791 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89813 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 89083 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89083 ']' 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89083 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89083 00:22:28.726 killing process with pid 89083 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89083' 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89083 00:22:28.726 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89083 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Dd3IU6Mn4R 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Dd3IU6Mn4R 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=89894 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 89894 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 89894 ']' 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.099 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:30.100 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.100 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.100 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.100 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.100 [2024-12-05 09:54:17.522295] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
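target/tls.sh@160 derives the long-form key via format_interchange_psk, which shells out to python. A hedged sketch of that encoding, assuming the NVMe/TCP PSK interchange convention: base64 of the configured PSK followed by its little-endian CRC-32, wrapped as NVMeTLSkey-1:<hash id>:<b64>:, where hash id 02 selects the 48-byte/SHA-384 variant. Note that the 48-character hex string is used as the PSK payload verbatim:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
psk = sys.argv[1].encode()                # the hex string itself is the PSK payload
crc = struct.pack('<I', zlib.crc32(psk))  # assumed 4-byte little-endian CRC-32 trailer
print(f"NVMeTLSkey-1:02:{base64.b64encode(psk + crc).decode()}:")
EOF

The script then writes this key to a mktemp file and chmods it 0600; as the failures later in this log show, keyring_file rejects key files that are readable by group or other.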
00:22:30.100 [2024-12-05 09:54:17.522458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.100 [2024-12-05 09:54:17.692100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.358 [2024-12-05 09:54:17.795568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.358 [2024-12-05 09:54:17.795640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.358 [2024-12-05 09:54:17.795658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.358 [2024-12-05 09:54:17.795684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.358 [2024-12-05 09:54:17.795697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.358 [2024-12-05 09:54:17.796726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dd3IU6Mn4R 00:22:30.925 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.184 [2024-12-05 09:54:18.731405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.184 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:31.442 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:31.700 [2024-12-05 09:54:19.303478] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:31.700 [2024-12-05 09:54:19.303744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:31.700 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:31.957 malloc0 00:22:31.957 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.524 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:32.524 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dd3IU6Mn4R 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dd3IU6Mn4R 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=90003 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 90003 /var/tmp/bdevperf.sock 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90003 ']' 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.783 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.042 [2024-12-05 09:54:20.456508] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
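Before this successful run, setup_nvmf_tgt (target/tls.sh@50 through @59) assembled the TLS-enabled target. Condensed from the trace, with every command and value taken from the log (rpc.py talks to the default /var/tmp/spdk.sock here):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k  # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R    # key file must be mode 0600
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0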
00:22:33.042 [2024-12-05 09:54:20.456671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90003 ] 00:22:33.042 [2024-12-05 09:54:20.634424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.300 [2024-12-05 09:54:20.795604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.867 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.867 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:33.867 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:34.126 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:34.383 [2024-12-05 09:54:21.758793] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.383 TLSTESTn1 00:22:34.383 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:34.383 Running I/O for 10 seconds... 00:22:36.709 3361.00 IOPS, 13.13 MiB/s [2024-12-05T09:54:25.285Z] 3392.00 IOPS, 13.25 MiB/s [2024-12-05T09:54:26.224Z] 3413.33 IOPS, 13.33 MiB/s [2024-12-05T09:54:27.162Z] 3409.75 IOPS, 13.32 MiB/s [2024-12-05T09:54:28.098Z] 3411.20 IOPS, 13.32 MiB/s [2024-12-05T09:54:29.033Z] 3413.67 IOPS, 13.33 MiB/s [2024-12-05T09:54:29.969Z] 3420.71 IOPS, 13.36 MiB/s [2024-12-05T09:54:31.345Z] 3424.00 IOPS, 13.38 MiB/s [2024-12-05T09:54:32.281Z] 3426.78 IOPS, 13.39 MiB/s [2024-12-05T09:54:32.281Z] 3426.90 IOPS, 13.39 MiB/s 00:22:44.652 Latency(us) 00:22:44.652 [2024-12-05T09:54:32.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.652 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.652 Verification LBA range: start 0x0 length 0x2000 00:22:44.652 TLSTESTn1 : 10.03 3428.53 13.39 0.00 0.00 37250.80 7685.59 24188.74 00:22:44.652 [2024-12-05T09:54:32.281Z] =================================================================================================================== 00:22:44.652 [2024-12-05T09:54:32.281Z] Total : 3428.53 13.39 0.00 0.00 37250.80 7685.59 24188.74 00:22:44.652 { 00:22:44.652 "results": [ 00:22:44.652 { 00:22:44.652 "job": "TLSTESTn1", 00:22:44.652 "core_mask": "0x4", 00:22:44.652 "workload": "verify", 00:22:44.652 "status": "finished", 00:22:44.652 "verify_range": { 00:22:44.652 "start": 0, 00:22:44.652 "length": 8192 00:22:44.652 }, 00:22:44.652 "queue_depth": 128, 00:22:44.652 "io_size": 4096, 00:22:44.652 "runtime": 10.031987, 00:22:44.652 "iops": 3428.533151009865, 00:22:44.652 "mibps": 13.392707621132285, 00:22:44.652 "io_failed": 0, 00:22:44.652 "io_timeout": 0, 00:22:44.652 "avg_latency_us": 37250.80132048792, 00:22:44.652 "min_latency_us": 7685.585454545455, 00:22:44.652 "max_latency_us": 24188.741818181818 00:22:44.652 } 00:22:44.652 ], 00:22:44.652 "core_count": 1 00:22:44.652 } 00:22:44.652 09:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 90003 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90003 ']' 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90003 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.652 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90003 00:22:44.652 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.652 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.652 killing process with pid 90003 00:22:44.652 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90003' 00:22:44.652 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.652 00:22:44.652 Latency(us) 00:22:44.652 [2024-12-05T09:54:32.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.652 [2024-12-05T09:54:32.281Z] =================================================================================================================== 00:22:44.652 [2024-12-05T09:54:32.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.652 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90003 00:22:44.652 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90003 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Dd3IU6Mn4R 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dd3IU6Mn4R 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dd3IU6Mn4R 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dd3IU6Mn4R 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.Dd3IU6Mn4R 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=90176 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 90176 /var/tmp/bdevperf.sock 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90176 ']' 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.589 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.589 [2024-12-05 09:54:33.129523] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:45.589 [2024-12-05 09:54:33.129686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90176 ] 00:22:45.848 [2024-12-05 09:54:33.310638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.848 [2024-12-05 09:54:33.422655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.415 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.415 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.415 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:46.674 [2024-12-05 09:54:34.197691] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Dd3IU6Mn4R': 0100666 00:22:46.674 [2024-12-05 09:54:34.197766] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:46.674 2024/12/05 09:54:34 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Dd3IU6Mn4R], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:46.674 request: 00:22:46.674 { 00:22:46.674 "method": "keyring_file_add_key", 00:22:46.674 "params": { 00:22:46.674 "name": "key0", 00:22:46.674 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:22:46.674 } 00:22:46.674 } 00:22:46.674 Got JSON-RPC error response 00:22:46.674 GoRPCClient: error on JSON-RPC call 00:22:46.674 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.933 [2024-12-05 09:54:34.413898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.933 [2024-12-05 09:54:34.414014] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:46.933 2024/12/05 09:54:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:22:46.933 request: 00:22:46.933 { 00:22:46.933 "method": "bdev_nvme_attach_controller", 00:22:46.933 "params": { 00:22:46.933 "name": "TLSTEST", 00:22:46.933 "trtype": "tcp", 00:22:46.933 "traddr": "10.0.0.3", 00:22:46.933 "adrfam": "ipv4", 00:22:46.933 "trsvcid": "4420", 00:22:46.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.933 "prchk_reftag": false, 00:22:46.933 "prchk_guard": false, 00:22:46.933 "hdgst": false, 00:22:46.933 "ddgst": false, 00:22:46.933 "psk": "key0", 00:22:46.933 "allow_unrecognized_csi": false 00:22:46.933 } 00:22:46.933 } 00:22:46.933 Got JSON-RPC error response 00:22:46.933 GoRPCClient: error on JSON-RPC call 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 90176 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90176 ']' 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90176 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90176 00:22:46.933 killing process with pid 90176 00:22:46.933 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.933 00:22:46.933 Latency(us) 00:22:46.933 [2024-12-05T09:54:34.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.933 [2024-12-05T09:54:34.562Z] =================================================================================================================== 00:22:46.933 [2024-12-05T09:54:34.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90176' 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90176 00:22:46.933 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90176 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
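The failure above is the point of target/tls.sh@171 and @172: after chmod 0666, keyring_file_check_path rejects the key file (mode 0100666), so the attach cannot load key0 and NOT converts the failure into a pass. A minimal sketch of that mode check, against a hypothetical running SPDK app on the default RPC socket:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.Dd3IU6Mn4R
$RPC keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R || echo "rejected: world-readable"  # Code=-1, as above
chmod 0600 /tmp/tmp.Dd3IU6Mn4R
$RPC keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R && echo "accepted: owner-only"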
00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 89894 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 89894 ']' 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 89894 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89894 00:22:47.905 killing process with pid 89894 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89894' 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 89894 00:22:47.905 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 89894 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=90251 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 90251 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90251 ']' 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.843 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.101 [2024-12-05 09:54:36.551344] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
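The es bookkeeping visible after each expected failure in this trace (local es=0 ... (( !es == 0 ))) comes from the NOT helper in autotest_common.sh. An illustrative simplification of that wrapper, not its exact body:

NOT() {
    local es=0
    "$@" || es=$?
    # Invert the result: the wrapped command failing is the passing outcome.
    (( es != 0 ))
}
NOT setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R && echo "setup was rejected, as required"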
00:22:49.101 [2024-12-05 09:54:36.551524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.359 [2024-12-05 09:54:36.736790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.359 [2024-12-05 09:54:36.834954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.359 [2024-12-05 09:54:36.835017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.359 [2024-12-05 09:54:36.835034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.359 [2024-12-05 09:54:36.835059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.359 [2024-12-05 09:54:36.835072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.359 [2024-12-05 09:54:36.836162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dd3IU6Mn4R 00:22:49.927 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.186 [2024-12-05 09:54:37.791656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.186 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.445 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:50.703 [2024-12-05 09:54:38.231759] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.703 [2024-12-05 09:54:38.232068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.703 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.988 malloc0 00:22:50.988 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.247 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:51.505 [2024-12-05 09:54:39.059265] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Dd3IU6Mn4R': 0100666 00:22:51.505 [2024-12-05 09:54:39.059337] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:51.505 2024/12/05 09:54:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Dd3IU6Mn4R], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:51.505 request: 00:22:51.505 { 00:22:51.505 "method": "keyring_file_add_key", 00:22:51.505 "params": { 00:22:51.505 "name": "key0", 00:22:51.505 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:22:51.506 } 00:22:51.506 } 00:22:51.506 Got JSON-RPC error response 00:22:51.506 GoRPCClient: error on JSON-RPC call 00:22:51.506 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.764 [2024-12-05 09:54:39.355372] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:51.764 [2024-12-05 09:54:39.355432] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:51.764 2024/12/05 09:54:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:51.764 request: 00:22:51.764 { 00:22:51.764 "method": "nvmf_subsystem_add_host", 00:22:51.764 "params": { 00:22:51.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.764 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.764 "psk": "key0" 00:22:51.764 } 00:22:51.764 } 00:22:51.764 Got JSON-RPC error response 00:22:51.764 GoRPCClient: error on JSON-RPC call 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 90251 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90251 ']' 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 90251 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.764 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90251 00:22:52.023 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:52.023 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:52.023 killing process with pid 90251 00:22:52.023 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90251' 00:22:52.023 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90251 00:22:52.023 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90251 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Dd3IU6Mn4R 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=90382 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 90382 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90382 ']' 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.959 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.959 [2024-12-05 09:54:40.495711] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:52.959 [2024-12-05 09:54:40.496265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.218 [2024-12-05 09:54:40.689168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.218 [2024-12-05 09:54:40.786800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
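The kill -0 / uname / ps sequence that opens this block is the killprocess helper, reduced here to the steps visible in the trace (the real helper also handles sudo-owned processes):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" || true                           # reap a child; ignore its exit status
}
killprocess 90251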
00:22:53.218 [2024-12-05 09:54:40.786865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.218 [2024-12-05 09:54:40.786883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.218 [2024-12-05 09:54:40.786909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.218 [2024-12-05 09:54:40.786922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.218 [2024-12-05 09:54:40.787953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dd3IU6Mn4R 00:22:53.785 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.044 [2024-12-05 09:54:41.662291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.303 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:54.561 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:54.561 [2024-12-05 09:54:42.158422] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.561 [2024-12-05 09:54:42.158690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:54.561 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.130 malloc0 00:22:55.130 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.130 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:55.389 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.648 09:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=90492 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 90492 /var/tmp/bdevperf.sock 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90492 ']' 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.648 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.907 [2024-12-05 09:54:43.287183] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:55.907 [2024-12-05 09:54:43.287326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90492 ] 00:22:55.907 [2024-12-05 09:54:43.463503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.166 [2024-12-05 09:54:43.621702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.734 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.734 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.734 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:22:56.994 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.994 [2024-12-05 09:54:44.580246] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.253 TLSTESTn1 00:22:57.253 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:57.513 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:57.513 "subsystems": [ 00:22:57.513 { 00:22:57.513 "subsystem": "keyring", 00:22:57.513 "config": [ 00:22:57.513 { 00:22:57.513 "method": "keyring_file_add_key", 00:22:57.513 "params": { 00:22:57.513 "name": "key0", 00:22:57.513 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:22:57.513 } 00:22:57.513 } 00:22:57.513 ] 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "subsystem": "iobuf", 00:22:57.513 "config": [ 00:22:57.513 { 00:22:57.513 "method": "iobuf_set_options", 00:22:57.513 "params": { 00:22:57.513 "enable_numa": false, 00:22:57.513 "large_bufsize": 135168, 00:22:57.513 
"large_pool_count": 1024, 00:22:57.513 "small_bufsize": 8192, 00:22:57.513 "small_pool_count": 8192 00:22:57.513 } 00:22:57.513 } 00:22:57.513 ] 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "subsystem": "sock", 00:22:57.513 "config": [ 00:22:57.513 { 00:22:57.513 "method": "sock_set_default_impl", 00:22:57.513 "params": { 00:22:57.513 "impl_name": "posix" 00:22:57.513 } 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "method": "sock_impl_set_options", 00:22:57.513 "params": { 00:22:57.513 "enable_ktls": false, 00:22:57.513 "enable_placement_id": 0, 00:22:57.513 "enable_quickack": false, 00:22:57.513 "enable_recv_pipe": true, 00:22:57.513 "enable_zerocopy_send_client": false, 00:22:57.513 "enable_zerocopy_send_server": true, 00:22:57.513 "impl_name": "ssl", 00:22:57.513 "recv_buf_size": 4096, 00:22:57.513 "send_buf_size": 4096, 00:22:57.513 "tls_version": 0, 00:22:57.513 "zerocopy_threshold": 0 00:22:57.513 } 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "method": "sock_impl_set_options", 00:22:57.513 "params": { 00:22:57.513 "enable_ktls": false, 00:22:57.513 "enable_placement_id": 0, 00:22:57.513 "enable_quickack": false, 00:22:57.513 "enable_recv_pipe": true, 00:22:57.513 "enable_zerocopy_send_client": false, 00:22:57.513 "enable_zerocopy_send_server": true, 00:22:57.513 "impl_name": "posix", 00:22:57.513 "recv_buf_size": 2097152, 00:22:57.513 "send_buf_size": 2097152, 00:22:57.513 "tls_version": 0, 00:22:57.513 "zerocopy_threshold": 0 00:22:57.513 } 00:22:57.513 } 00:22:57.513 ] 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "subsystem": "vmd", 00:22:57.513 "config": [] 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "subsystem": "accel", 00:22:57.513 "config": [ 00:22:57.513 { 00:22:57.513 "method": "accel_set_options", 00:22:57.513 "params": { 00:22:57.513 "buf_count": 2048, 00:22:57.513 "large_cache_size": 16, 00:22:57.513 "sequence_count": 2048, 00:22:57.513 "small_cache_size": 128, 00:22:57.513 "task_count": 2048 00:22:57.513 } 00:22:57.513 } 00:22:57.513 ] 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "subsystem": "bdev", 00:22:57.513 "config": [ 00:22:57.513 { 00:22:57.513 "method": "bdev_set_options", 00:22:57.513 "params": { 00:22:57.513 "bdev_auto_examine": true, 00:22:57.513 "bdev_io_cache_size": 256, 00:22:57.513 "bdev_io_pool_size": 65535, 00:22:57.513 "iobuf_large_cache_size": 16, 00:22:57.513 "iobuf_small_cache_size": 128 00:22:57.513 } 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "method": "bdev_raid_set_options", 00:22:57.513 "params": { 00:22:57.513 "process_max_bandwidth_mb_sec": 0, 00:22:57.513 "process_window_size_kb": 1024 00:22:57.513 } 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "method": "bdev_iscsi_set_options", 00:22:57.513 "params": { 00:22:57.513 "timeout_sec": 30 00:22:57.513 } 00:22:57.513 }, 00:22:57.513 { 00:22:57.513 "method": "bdev_nvme_set_options", 00:22:57.513 "params": { 00:22:57.513 "action_on_timeout": "none", 00:22:57.513 "allow_accel_sequence": false, 00:22:57.514 "arbitration_burst": 0, 00:22:57.514 "bdev_retry_count": 3, 00:22:57.514 "ctrlr_loss_timeout_sec": 0, 00:22:57.514 "delay_cmd_submit": true, 00:22:57.514 "dhchap_dhgroups": [ 00:22:57.514 "null", 00:22:57.514 "ffdhe2048", 00:22:57.514 "ffdhe3072", 00:22:57.514 "ffdhe4096", 00:22:57.514 "ffdhe6144", 00:22:57.514 "ffdhe8192" 00:22:57.514 ], 00:22:57.514 "dhchap_digests": [ 00:22:57.514 "sha256", 00:22:57.514 "sha384", 00:22:57.514 "sha512" 00:22:57.514 ], 00:22:57.514 "disable_auto_failback": false, 00:22:57.514 "fast_io_fail_timeout_sec": 0, 00:22:57.514 "generate_uuids": false, 00:22:57.514 
"high_priority_weight": 0, 00:22:57.514 "io_path_stat": false, 00:22:57.514 "io_queue_requests": 0, 00:22:57.514 "keep_alive_timeout_ms": 10000, 00:22:57.514 "low_priority_weight": 0, 00:22:57.514 "medium_priority_weight": 0, 00:22:57.514 "nvme_adminq_poll_period_us": 10000, 00:22:57.514 "nvme_error_stat": false, 00:22:57.514 "nvme_ioq_poll_period_us": 0, 00:22:57.514 "rdma_cm_event_timeout_ms": 0, 00:22:57.514 "rdma_max_cq_size": 0, 00:22:57.514 "rdma_srq_size": 0, 00:22:57.514 "reconnect_delay_sec": 0, 00:22:57.514 "timeout_admin_us": 0, 00:22:57.514 "timeout_us": 0, 00:22:57.514 "transport_ack_timeout": 0, 00:22:57.514 "transport_retry_count": 4, 00:22:57.514 "transport_tos": 0 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "bdev_nvme_set_hotplug", 00:22:57.514 "params": { 00:22:57.514 "enable": false, 00:22:57.514 "period_us": 100000 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "bdev_malloc_create", 00:22:57.514 "params": { 00:22:57.514 "block_size": 4096, 00:22:57.514 "dif_is_head_of_md": false, 00:22:57.514 "dif_pi_format": 0, 00:22:57.514 "dif_type": 0, 00:22:57.514 "md_size": 0, 00:22:57.514 "name": "malloc0", 00:22:57.514 "num_blocks": 8192, 00:22:57.514 "optimal_io_boundary": 0, 00:22:57.514 "physical_block_size": 4096, 00:22:57.514 "uuid": "32962667-a81b-4500-aefb-fe8e8df000e8" 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "bdev_wait_for_examine" 00:22:57.514 } 00:22:57.514 ] 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "subsystem": "nbd", 00:22:57.514 "config": [] 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "subsystem": "scheduler", 00:22:57.514 "config": [ 00:22:57.514 { 00:22:57.514 "method": "framework_set_scheduler", 00:22:57.514 "params": { 00:22:57.514 "name": "static" 00:22:57.514 } 00:22:57.514 } 00:22:57.514 ] 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "subsystem": "nvmf", 00:22:57.514 "config": [ 00:22:57.514 { 00:22:57.514 "method": "nvmf_set_config", 00:22:57.514 "params": { 00:22:57.514 "admin_cmd_passthru": { 00:22:57.514 "identify_ctrlr": false 00:22:57.514 }, 00:22:57.514 "dhchap_dhgroups": [ 00:22:57.514 "null", 00:22:57.514 "ffdhe2048", 00:22:57.514 "ffdhe3072", 00:22:57.514 "ffdhe4096", 00:22:57.514 "ffdhe6144", 00:22:57.514 "ffdhe8192" 00:22:57.514 ], 00:22:57.514 "dhchap_digests": [ 00:22:57.514 "sha256", 00:22:57.514 "sha384", 00:22:57.514 "sha512" 00:22:57.514 ], 00:22:57.514 "discovery_filter": "match_any" 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "nvmf_set_max_subsystems", 00:22:57.514 "params": { 00:22:57.514 "max_subsystems": 1024 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "nvmf_set_crdt", 00:22:57.514 "params": { 00:22:57.514 "crdt1": 0, 00:22:57.514 "crdt2": 0, 00:22:57.514 "crdt3": 0 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "nvmf_create_transport", 00:22:57.514 "params": { 00:22:57.514 "abort_timeout_sec": 1, 00:22:57.514 "ack_timeout": 0, 00:22:57.514 "buf_cache_size": 4294967295, 00:22:57.514 "c2h_success": false, 00:22:57.514 "data_wr_pool_size": 0, 00:22:57.514 "dif_insert_or_strip": false, 00:22:57.514 "in_capsule_data_size": 4096, 00:22:57.514 "io_unit_size": 131072, 00:22:57.514 "max_aq_depth": 128, 00:22:57.514 "max_io_qpairs_per_ctrlr": 127, 00:22:57.514 "max_io_size": 131072, 00:22:57.514 "max_queue_depth": 128, 00:22:57.514 "num_shared_buffers": 511, 00:22:57.514 "sock_priority": 0, 00:22:57.514 "trtype": "TCP", 00:22:57.514 "zcopy": false 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 
00:22:57.514 "method": "nvmf_create_subsystem", 00:22:57.514 "params": { 00:22:57.514 "allow_any_host": false, 00:22:57.514 "ana_reporting": false, 00:22:57.514 "max_cntlid": 65519, 00:22:57.514 "max_namespaces": 10, 00:22:57.514 "min_cntlid": 1, 00:22:57.514 "model_number": "SPDK bdev Controller", 00:22:57.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.514 "serial_number": "SPDK00000000000001" 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "nvmf_subsystem_add_host", 00:22:57.514 "params": { 00:22:57.514 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.514 "psk": "key0" 00:22:57.514 } 00:22:57.514 }, 00:22:57.514 { 00:22:57.514 "method": "nvmf_subsystem_add_ns", 00:22:57.514 "params": { 00:22:57.514 "namespace": { 00:22:57.514 "bdev_name": "malloc0", 00:22:57.515 "nguid": "32962667A81B4500AEFBFE8E8DF000E8", 00:22:57.515 "no_auto_visible": false, 00:22:57.515 "nsid": 1, 00:22:57.515 "uuid": "32962667-a81b-4500-aefb-fe8e8df000e8" 00:22:57.515 }, 00:22:57.515 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:57.515 } 00:22:57.515 }, 00:22:57.515 { 00:22:57.515 "method": "nvmf_subsystem_add_listener", 00:22:57.515 "params": { 00:22:57.515 "listen_address": { 00:22:57.515 "adrfam": "IPv4", 00:22:57.515 "traddr": "10.0.0.3", 00:22:57.515 "trsvcid": "4420", 00:22:57.515 "trtype": "TCP" 00:22:57.515 }, 00:22:57.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.515 "secure_channel": true 00:22:57.515 } 00:22:57.515 } 00:22:57.515 ] 00:22:57.515 } 00:22:57.515 ] 00:22:57.515 }' 00:22:57.515 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:58.082 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:58.082 "subsystems": [ 00:22:58.082 { 00:22:58.082 "subsystem": "keyring", 00:22:58.082 "config": [ 00:22:58.082 { 00:22:58.082 "method": "keyring_file_add_key", 00:22:58.082 "params": { 00:22:58.082 "name": "key0", 00:22:58.082 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:22:58.082 } 00:22:58.082 } 00:22:58.082 ] 00:22:58.082 }, 00:22:58.082 { 00:22:58.082 "subsystem": "iobuf", 00:22:58.082 "config": [ 00:22:58.082 { 00:22:58.082 "method": "iobuf_set_options", 00:22:58.082 "params": { 00:22:58.082 "enable_numa": false, 00:22:58.082 "large_bufsize": 135168, 00:22:58.082 "large_pool_count": 1024, 00:22:58.082 "small_bufsize": 8192, 00:22:58.082 "small_pool_count": 8192 00:22:58.082 } 00:22:58.082 } 00:22:58.082 ] 00:22:58.082 }, 00:22:58.082 { 00:22:58.082 "subsystem": "sock", 00:22:58.082 "config": [ 00:22:58.082 { 00:22:58.082 "method": "sock_set_default_impl", 00:22:58.082 "params": { 00:22:58.082 "impl_name": "posix" 00:22:58.082 } 00:22:58.082 }, 00:22:58.082 { 00:22:58.082 "method": "sock_impl_set_options", 00:22:58.082 "params": { 00:22:58.082 "enable_ktls": false, 00:22:58.082 "enable_placement_id": 0, 00:22:58.082 "enable_quickack": false, 00:22:58.082 "enable_recv_pipe": true, 00:22:58.082 "enable_zerocopy_send_client": false, 00:22:58.082 "enable_zerocopy_send_server": true, 00:22:58.082 "impl_name": "ssl", 00:22:58.082 "recv_buf_size": 4096, 00:22:58.082 "send_buf_size": 4096, 00:22:58.082 "tls_version": 0, 00:22:58.082 "zerocopy_threshold": 0 00:22:58.082 } 00:22:58.082 }, 00:22:58.082 { 00:22:58.082 "method": "sock_impl_set_options", 00:22:58.082 "params": { 00:22:58.082 "enable_ktls": false, 00:22:58.082 "enable_placement_id": 0, 00:22:58.082 "enable_quickack": false, 00:22:58.082 
"enable_recv_pipe": true, 00:22:58.082 "enable_zerocopy_send_client": false, 00:22:58.082 "enable_zerocopy_send_server": true, 00:22:58.082 "impl_name": "posix", 00:22:58.082 "recv_buf_size": 2097152, 00:22:58.082 "send_buf_size": 2097152, 00:22:58.082 "tls_version": 0, 00:22:58.082 "zerocopy_threshold": 0 00:22:58.082 } 00:22:58.082 } 00:22:58.082 ] 00:22:58.082 }, 00:22:58.082 { 00:22:58.082 "subsystem": "vmd", 00:22:58.082 "config": [] 00:22:58.082 }, 00:22:58.083 { 00:22:58.083 "subsystem": "accel", 00:22:58.083 "config": [ 00:22:58.083 { 00:22:58.083 "method": "accel_set_options", 00:22:58.083 "params": { 00:22:58.083 "buf_count": 2048, 00:22:58.083 "large_cache_size": 16, 00:22:58.083 "sequence_count": 2048, 00:22:58.083 "small_cache_size": 128, 00:22:58.083 "task_count": 2048 00:22:58.083 } 00:22:58.083 } 00:22:58.083 ] 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "subsystem": "bdev", 00:22:58.083 "config": [ 00:22:58.083 { 00:22:58.083 "method": "bdev_set_options", 00:22:58.083 "params": { 00:22:58.083 "bdev_auto_examine": true, 00:22:58.083 "bdev_io_cache_size": 256, 00:22:58.083 "bdev_io_pool_size": 65535, 00:22:58.083 "iobuf_large_cache_size": 16, 00:22:58.083 "iobuf_small_cache_size": 128 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_raid_set_options", 00:22:58.083 "params": { 00:22:58.083 "process_max_bandwidth_mb_sec": 0, 00:22:58.083 "process_window_size_kb": 1024 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_iscsi_set_options", 00:22:58.083 "params": { 00:22:58.083 "timeout_sec": 30 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_nvme_set_options", 00:22:58.083 "params": { 00:22:58.083 "action_on_timeout": "none", 00:22:58.083 "allow_accel_sequence": false, 00:22:58.083 "arbitration_burst": 0, 00:22:58.083 "bdev_retry_count": 3, 00:22:58.083 "ctrlr_loss_timeout_sec": 0, 00:22:58.083 "delay_cmd_submit": true, 00:22:58.083 "dhchap_dhgroups": [ 00:22:58.083 "null", 00:22:58.083 "ffdhe2048", 00:22:58.083 "ffdhe3072", 00:22:58.083 "ffdhe4096", 00:22:58.083 "ffdhe6144", 00:22:58.083 "ffdhe8192" 00:22:58.083 ], 00:22:58.083 "dhchap_digests": [ 00:22:58.083 "sha256", 00:22:58.083 "sha384", 00:22:58.083 "sha512" 00:22:58.083 ], 00:22:58.083 "disable_auto_failback": false, 00:22:58.083 "fast_io_fail_timeout_sec": 0, 00:22:58.083 "generate_uuids": false, 00:22:58.083 "high_priority_weight": 0, 00:22:58.083 "io_path_stat": false, 00:22:58.083 "io_queue_requests": 512, 00:22:58.083 "keep_alive_timeout_ms": 10000, 00:22:58.083 "low_priority_weight": 0, 00:22:58.083 "medium_priority_weight": 0, 00:22:58.083 "nvme_adminq_poll_period_us": 10000, 00:22:58.083 "nvme_error_stat": false, 00:22:58.083 "nvme_ioq_poll_period_us": 0, 00:22:58.083 "rdma_cm_event_timeout_ms": 0, 00:22:58.083 "rdma_max_cq_size": 0, 00:22:58.083 "rdma_srq_size": 0, 00:22:58.083 "reconnect_delay_sec": 0, 00:22:58.083 "timeout_admin_us": 0, 00:22:58.083 "timeout_us": 0, 00:22:58.083 "transport_ack_timeout": 0, 00:22:58.083 "transport_retry_count": 4, 00:22:58.083 "transport_tos": 0 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_nvme_attach_controller", 00:22:58.083 "params": { 00:22:58.083 "adrfam": "IPv4", 00:22:58.083 "ctrlr_loss_timeout_sec": 0, 00:22:58.083 "ddgst": false, 00:22:58.083 "fast_io_fail_timeout_sec": 0, 00:22:58.083 "hdgst": false, 00:22:58.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.083 "multipath": "multipath", 00:22:58.083 "name": "TLSTEST", 00:22:58.083 "prchk_guard": false, 
00:22:58.083 "prchk_reftag": false, 00:22:58.083 "psk": "key0", 00:22:58.083 "reconnect_delay_sec": 0, 00:22:58.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.083 "traddr": "10.0.0.3", 00:22:58.083 "trsvcid": "4420", 00:22:58.083 "trtype": "TCP" 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_nvme_set_hotplug", 00:22:58.083 "params": { 00:22:58.083 "enable": false, 00:22:58.083 "period_us": 100000 00:22:58.083 } 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "method": "bdev_wait_for_examine" 00:22:58.083 } 00:22:58.083 ] 00:22:58.083 }, 00:22:58.083 { 00:22:58.083 "subsystem": "nbd", 00:22:58.083 "config": [] 00:22:58.083 } 00:22:58.083 ] 00:22:58.083 }' 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 90492 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90492 ']' 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90492 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90492 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.083 killing process with pid 90492 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90492' 00:22:58.083 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.083 00:22:58.083 Latency(us) 00:22:58.083 [2024-12-05T09:54:45.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.083 [2024-12-05T09:54:45.712Z] =================================================================================================================== 00:22:58.083 [2024-12-05T09:54:45.712Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90492 00:22:58.083 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90492 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 90382 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90382 ']' 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90382 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90382 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.019 killing process with pid 90382 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 90382' 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90382 00:22:59.019 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90382 00:22:59.955 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:59.955 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.955 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.955 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.955 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:59.955 "subsystems": [ 00:22:59.955 { 00:22:59.955 "subsystem": "keyring", 00:22:59.955 "config": [ 00:22:59.955 { 00:22:59.955 "method": "keyring_file_add_key", 00:22:59.955 "params": { 00:22:59.955 "name": "key0", 00:22:59.955 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:22:59.955 } 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "subsystem": "iobuf", 00:22:59.955 "config": [ 00:22:59.955 { 00:22:59.955 "method": "iobuf_set_options", 00:22:59.955 "params": { 00:22:59.955 "enable_numa": false, 00:22:59.955 "large_bufsize": 135168, 00:22:59.955 "large_pool_count": 1024, 00:22:59.955 "small_bufsize": 8192, 00:22:59.955 "small_pool_count": 8192 00:22:59.955 } 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "subsystem": "sock", 00:22:59.955 "config": [ 00:22:59.955 { 00:22:59.955 "method": "sock_set_default_impl", 00:22:59.955 "params": { 00:22:59.955 "impl_name": "posix" 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "sock_impl_set_options", 00:22:59.955 "params": { 00:22:59.955 "enable_ktls": false, 00:22:59.955 "enable_placement_id": 0, 00:22:59.955 "enable_quickack": false, 00:22:59.955 "enable_recv_pipe": true, 00:22:59.955 "enable_zerocopy_send_client": false, 00:22:59.955 "enable_zerocopy_send_server": true, 00:22:59.955 "impl_name": "ssl", 00:22:59.955 "recv_buf_size": 4096, 00:22:59.955 "send_buf_size": 4096, 00:22:59.955 "tls_version": 0, 00:22:59.955 "zerocopy_threshold": 0 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "sock_impl_set_options", 00:22:59.955 "params": { 00:22:59.955 "enable_ktls": false, 00:22:59.955 "enable_placement_id": 0, 00:22:59.955 "enable_quickack": false, 00:22:59.955 "enable_recv_pipe": true, 00:22:59.955 "enable_zerocopy_send_client": false, 00:22:59.955 "enable_zerocopy_send_server": true, 00:22:59.955 "impl_name": "posix", 00:22:59.955 "recv_buf_size": 2097152, 00:22:59.955 "send_buf_size": 2097152, 00:22:59.955 "tls_version": 0, 00:22:59.955 "zerocopy_threshold": 0 00:22:59.955 } 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "subsystem": "vmd", 00:22:59.955 "config": [] 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "subsystem": "accel", 00:22:59.955 "config": [ 00:22:59.955 { 00:22:59.955 "method": "accel_set_options", 00:22:59.955 "params": { 00:22:59.955 "buf_count": 2048, 00:22:59.955 "large_cache_size": 16, 00:22:59.955 "sequence_count": 2048, 00:22:59.955 "small_cache_size": 128, 00:22:59.955 "task_count": 2048 00:22:59.955 } 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "subsystem": "bdev", 00:22:59.955 "config": [ 00:22:59.955 { 00:22:59.955 "method": "bdev_set_options", 00:22:59.955 "params": { 00:22:59.955 "bdev_auto_examine": true, 00:22:59.955 
"bdev_io_cache_size": 256, 00:22:59.955 "bdev_io_pool_size": 65535, 00:22:59.955 "iobuf_large_cache_size": 16, 00:22:59.955 "iobuf_small_cache_size": 128 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "bdev_raid_set_options", 00:22:59.955 "params": { 00:22:59.955 "process_max_bandwidth_mb_sec": 0, 00:22:59.955 "process_window_size_kb": 1024 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "bdev_iscsi_set_options", 00:22:59.955 "params": { 00:22:59.955 "timeout_sec": 30 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "bdev_nvme_set_options", 00:22:59.955 "params": { 00:22:59.955 "action_on_timeout": "none", 00:22:59.955 "allow_accel_sequence": false, 00:22:59.955 "arbitration_burst": 0, 00:22:59.955 "bdev_retry_count": 3, 00:22:59.955 "ctrlr_loss_timeout_sec": 0, 00:22:59.955 "delay_cmd_submit": true, 00:22:59.955 "dhchap_dhgroups": [ 00:22:59.955 "null", 00:22:59.955 "ffdhe2048", 00:22:59.955 "ffdhe3072", 00:22:59.955 "ffdhe4096", 00:22:59.955 "ffdhe6144", 00:22:59.955 "ffdhe8192" 00:22:59.955 ], 00:22:59.955 "dhchap_digests": [ 00:22:59.955 "sha256", 00:22:59.955 "sha384", 00:22:59.955 "sha512" 00:22:59.955 ], 00:22:59.955 "disable_auto_failback": false, 00:22:59.955 "fast_io_fail_timeout_sec": 0, 00:22:59.955 "generate_uuids": false, 00:22:59.955 "high_priority_weight": 0, 00:22:59.955 "io_path_stat": false, 00:22:59.955 "io_queue_requests": 0, 00:22:59.955 "keep_alive_timeout_ms": 10000, 00:22:59.955 "low_priority_weight": 0, 00:22:59.955 "medium_priority_weight": 0, 00:22:59.955 "nvme_adminq_poll_period_us": 10000, 00:22:59.955 "nvme_error_stat": false, 00:22:59.955 "nvme_ioq_poll_period_us": 0, 00:22:59.955 "rdma_cm_event_timeout_ms": 0, 00:22:59.955 "rdma_max_cq_size": 0, 00:22:59.955 "rdma_srq_size": 0, 00:22:59.955 "reconnect_delay_sec": 0, 00:22:59.955 "timeout_admin_us": 0, 00:22:59.955 "timeout_us": 0, 00:22:59.955 "transport_ack_timeout": 0, 00:22:59.955 "transport_retry_count": 4, 00:22:59.955 "transport_tos": 0 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "bdev_nvme_set_hotplug", 00:22:59.955 "params": { 00:22:59.955 "enable": false, 00:22:59.955 "period_us": 100000 00:22:59.955 } 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "method": "bdev_malloc_create", 00:22:59.955 "params": { 00:22:59.955 "block_size": 4096, 00:22:59.955 "dif_is_head_of_md": false, 00:22:59.955 "dif_pi_format": 0, 00:22:59.955 "dif_type": 0, 00:22:59.955 "md_size": 0, 00:22:59.955 "name": "malloc0", 00:22:59.955 "num_blocks": 8192, 00:22:59.955 "optimal_io_boundary": 0, 00:22:59.955 "physical_block_size": 4096, 00:22:59.955 "uuid": "32962667-a81b-4500-aefb-fe8e8df000e8" 00:22:59.955 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "bdev_wait_for_examine" 00:22:59.956 } 00:22:59.956 ] 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "subsystem": "nbd", 00:22:59.956 "config": [] 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "subsystem": "scheduler", 00:22:59.956 "config": [ 00:22:59.956 { 00:22:59.956 "method": "framework_set_scheduler", 00:22:59.956 "params": { 00:22:59.956 "name": "static" 00:22:59.956 } 00:22:59.956 } 00:22:59.956 ] 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "subsystem": "nvmf", 00:22:59.956 "config": [ 00:22:59.956 { 00:22:59.956 "method": "nvmf_set_config", 00:22:59.956 "params": { 00:22:59.956 "admin_cmd_passthru": { 00:22:59.956 "identify_ctrlr": false 00:22:59.956 }, 00:22:59.956 "dhchap_dhgroups": [ 00:22:59.956 "null", 00:22:59.956 "ffdhe2048", 00:22:59.956 "ffdhe3072", 00:22:59.956 
"ffdhe4096", 00:22:59.956 "ffdhe6144", 00:22:59.956 "ffdhe8192" 00:22:59.956 ], 00:22:59.956 "dhchap_digests": [ 00:22:59.956 "sha256", 00:22:59.956 "sha384", 00:22:59.956 "sha512" 00:22:59.956 ], 00:22:59.956 "discovery_filter": "match_any" 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_set_max_subsystems", 00:22:59.956 "params": { 00:22:59.956 "max_subsystems": 1024 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_set_crdt", 00:22:59.956 "params": { 00:22:59.956 "crdt1": 0, 00:22:59.956 "crdt2": 0, 00:22:59.956 "crdt3": 0 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_create_transport", 00:22:59.956 "params": { 00:22:59.956 "abort_timeout_sec": 1, 00:22:59.956 "ack_timeout": 0, 00:22:59.956 "buf_cache_size": 4294967295, 00:22:59.956 "c2h_success": false, 00:22:59.956 "data_wr_pool_size": 0, 00:22:59.956 "dif_insert_or_strip": false, 00:22:59.956 "in_capsule_data_size": 4096, 00:22:59.956 "io_unit_size": 131072, 00:22:59.956 "max_aq_depth": 128, 00:22:59.956 "max_io_qpairs_per_ctrlr": 127, 00:22:59.956 "max_io_size": 131072, 00:22:59.956 "max_queue_depth": 128, 00:22:59.956 "num_shared_buffers": 511, 00:22:59.956 "sock_priority": 0, 00:22:59.956 "trtype": "TCP", 00:22:59.956 "zcopy": false 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_create_subsystem", 00:22:59.956 "params": { 00:22:59.956 "allow_any_host": false, 00:22:59.956 "ana_reporting": false, 00:22:59.956 "max_cntlid": 65519, 00:22:59.956 "max_namespaces": 10, 00:22:59.956 "min_cntlid": 1, 00:22:59.956 "model_number": "SPDK bdev Controller", 00:22:59.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.956 "serial_number": "SPDK00000000000001" 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_subsystem_add_host", 00:22:59.956 "params": { 00:22:59.956 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.956 "psk": "key0" 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_subsystem_add_ns", 00:22:59.956 "params": { 00:22:59.956 "namespace": { 00:22:59.956 "bdev_name": "malloc0", 00:22:59.956 "nguid": "32962667A81B4500AEFBFE8E8DF000E8", 00:22:59.956 "no_auto_visible": false, 00:22:59.956 "nsid": 1, 00:22:59.956 "uuid": "32962667-a81b-4500-aefb-fe8e8df000e8" 00:22:59.956 }, 00:22:59.956 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:59.956 } 00:22:59.956 }, 00:22:59.956 { 00:22:59.956 "method": "nvmf_subsystem_add_listener", 00:22:59.956 "params": { 00:22:59.956 "listen_address": { 00:22:59.956 "adrfam": "IPv4", 00:22:59.956 "traddr": "10.0.0.3", 00:22:59.956 "trsvcid": "4420", 00:22:59.956 "trtype": "TCP" 00:22:59.956 }, 00:22:59.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.956 "secure_channel": true 00:22:59.956 } 00:22:59.956 } 00:22:59.956 ] 00:22:59.956 } 00:22:59.956 ] 00:22:59.956 }' 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=90595 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 90595 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90595 ']' 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.956 
09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.956 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.956 [2024-12-05 09:54:47.479856] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:59.956 [2024-12-05 09:54:47.480052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.214 [2024-12-05 09:54:47.653914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.214 [2024-12-05 09:54:47.753019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.214 [2024-12-05 09:54:47.753086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.214 [2024-12-05 09:54:47.753121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.214 [2024-12-05 09:54:47.753147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.214 [2024-12-05 09:54:47.753161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.214 [2024-12-05 09:54:47.754447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.779 [2024-12-05 09:54:48.201001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.779 [2024-12-05 09:54:48.232921] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.779 [2024-12-05 09:54:48.233281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
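On the initiator side, bdevperf is launched idle with -z, given its own RPC socket, and configured through /dev/fd/63; the workload is only triggered later via perform_tests. A sketch assembled from the invocations in this log (assuming $bdevperfconf holds the JSON dumped above):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &                    # start idle, config from an fd
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # kick off the run remotely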
00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=90635 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 90635 /var/tmp/bdevperf.sock 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90635 ']' 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.779 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.780 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:00.780 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:00.780 "subsystems": [ 00:23:00.780 { 00:23:00.780 "subsystem": "keyring", 00:23:00.780 "config": [ 00:23:00.780 { 00:23:00.780 "method": "keyring_file_add_key", 00:23:00.780 "params": { 00:23:00.780 "name": "key0", 00:23:00.780 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:23:00.780 } 00:23:00.780 } 00:23:00.780 ] 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "iobuf", 00:23:00.780 "config": [ 00:23:00.780 { 00:23:00.780 "method": "iobuf_set_options", 00:23:00.780 "params": { 00:23:00.780 "enable_numa": false, 00:23:00.780 "large_bufsize": 135168, 00:23:00.780 "large_pool_count": 1024, 00:23:00.780 "small_bufsize": 8192, 00:23:00.780 "small_pool_count": 8192 00:23:00.780 } 00:23:00.780 } 00:23:00.780 ] 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "sock", 00:23:00.780 "config": [ 00:23:00.780 { 00:23:00.780 "method": "sock_set_default_impl", 00:23:00.780 "params": { 00:23:00.780 "impl_name": "posix" 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "sock_impl_set_options", 00:23:00.780 "params": { 00:23:00.780 "enable_ktls": false, 00:23:00.780 "enable_placement_id": 0, 00:23:00.780 "enable_quickack": false, 00:23:00.780 "enable_recv_pipe": true, 00:23:00.780 "enable_zerocopy_send_client": false, 00:23:00.780 "enable_zerocopy_send_server": true, 00:23:00.780 "impl_name": "ssl", 00:23:00.780 "recv_buf_size": 4096, 00:23:00.780 "send_buf_size": 4096, 00:23:00.780 "tls_version": 0, 00:23:00.780 "zerocopy_threshold": 0 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "sock_impl_set_options", 00:23:00.780 "params": { 00:23:00.780 "enable_ktls": false, 00:23:00.780 "enable_placement_id": 0, 00:23:00.780 "enable_quickack": false, 00:23:00.780 "enable_recv_pipe": true, 00:23:00.780 "enable_zerocopy_send_client": false, 00:23:00.780 "enable_zerocopy_send_server": true, 00:23:00.780 "impl_name": "posix", 00:23:00.780 "recv_buf_size": 2097152, 00:23:00.780 "send_buf_size": 2097152, 00:23:00.780 "tls_version": 0, 00:23:00.780 "zerocopy_threshold": 0 00:23:00.780 } 00:23:00.780 } 00:23:00.780 ] 
00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "vmd", 00:23:00.780 "config": [] 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "accel", 00:23:00.780 "config": [ 00:23:00.780 { 00:23:00.780 "method": "accel_set_options", 00:23:00.780 "params": { 00:23:00.780 "buf_count": 2048, 00:23:00.780 "large_cache_size": 16, 00:23:00.780 "sequence_count": 2048, 00:23:00.780 "small_cache_size": 128, 00:23:00.780 "task_count": 2048 00:23:00.780 } 00:23:00.780 } 00:23:00.780 ] 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "bdev", 00:23:00.780 "config": [ 00:23:00.780 { 00:23:00.780 "method": "bdev_set_options", 00:23:00.780 "params": { 00:23:00.780 "bdev_auto_examine": true, 00:23:00.780 "bdev_io_cache_size": 256, 00:23:00.780 "bdev_io_pool_size": 65535, 00:23:00.780 "iobuf_large_cache_size": 16, 00:23:00.780 "iobuf_small_cache_size": 128 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_raid_set_options", 00:23:00.780 "params": { 00:23:00.780 "process_max_bandwidth_mb_sec": 0, 00:23:00.780 "process_window_size_kb": 1024 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_iscsi_set_options", 00:23:00.780 "params": { 00:23:00.780 "timeout_sec": 30 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_nvme_set_options", 00:23:00.780 "params": { 00:23:00.780 "action_on_timeout": "none", 00:23:00.780 "allow_accel_sequence": false, 00:23:00.780 "arbitration_burst": 0, 00:23:00.780 "bdev_retry_count": 3, 00:23:00.780 "ctrlr_loss_timeout_sec": 0, 00:23:00.780 "delay_cmd_submit": true, 00:23:00.780 "dhchap_dhgroups": [ 00:23:00.780 "null", 00:23:00.780 "ffdhe2048", 00:23:00.780 "ffdhe3072", 00:23:00.780 "ffdhe4096", 00:23:00.780 "ffdhe6144", 00:23:00.780 "ffdhe8192" 00:23:00.780 ], 00:23:00.780 "dhchap_digests": [ 00:23:00.780 "sha256", 00:23:00.780 "sha384", 00:23:00.780 "sha512" 00:23:00.780 ], 00:23:00.780 "disable_auto_failback": false, 00:23:00.780 "fast_io_fail_timeout_sec": 0, 00:23:00.780 "generate_uuids": false, 00:23:00.780 "high_priority_weight": 0, 00:23:00.780 "io_path_stat": false, 00:23:00.780 "io_queue_requests": 512, 00:23:00.780 "keep_alive_timeout_ms": 10000, 00:23:00.780 "low_priority_weight": 0, 00:23:00.780 "medium_priority_weight": 0, 00:23:00.780 "nvme_adminq_poll_period_us": 10000, 00:23:00.780 "nvme_error_stat": false, 00:23:00.780 "nvme_ioq_poll_period_us": 0, 00:23:00.780 "rdma_cm_event_timeout_ms": 0, 00:23:00.780 "rdma_max_cq_size": 0, 00:23:00.780 "rdma_srq_size": 0, 00:23:00.780 "reconnect_delay_sec": 0, 00:23:00.780 "timeout_admin_us": 0, 00:23:00.780 "timeout_us": 0, 00:23:00.780 "transport_ack_timeout": 0, 00:23:00.780 "transport_retry_count": 4, 00:23:00.780 "transport_tos": 0 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_nvme_attach_controller", 00:23:00.780 "params": { 00:23:00.780 "adrfam": "IPv4", 00:23:00.780 "ctrlr_loss_timeout_sec": 0, 00:23:00.780 "ddgst": false, 00:23:00.780 "fast_io_fail_timeout_sec": 0, 00:23:00.780 "hdgst": false, 00:23:00.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.780 "multipath": "multipath", 00:23:00.780 "name": "TLSTEST", 00:23:00.780 "prchk_guard": false, 00:23:00.780 "prchk_reftag": false, 00:23:00.780 "psk": "key0", 00:23:00.780 "reconnect_delay_sec": 0, 00:23:00.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.780 "traddr": "10.0.0.3", 00:23:00.780 "trsvcid": "4420", 00:23:00.780 "trtype": "TCP" 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_nvme_set_hotplug", 00:23:00.780 
"params": { 00:23:00.780 "enable": false, 00:23:00.780 "period_us": 100000 00:23:00.780 } 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "method": "bdev_wait_for_examine" 00:23:00.780 } 00:23:00.780 ] 00:23:00.780 }, 00:23:00.780 { 00:23:00.780 "subsystem": "nbd", 00:23:00.780 "config": [] 00:23:00.780 } 00:23:00.780 ] 00:23:00.780 }' 00:23:01.040 [2024-12-05 09:54:48.468285] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:01.040 [2024-12-05 09:54:48.468491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90635 ] 00:23:01.040 [2024-12-05 09:54:48.655475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.298 [2024-12-05 09:54:48.802972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.557 [2024-12-05 09:54:49.167011] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.817 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.817 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:01.817 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:02.076 Running I/O for 10 seconds... 00:23:03.945 3284.00 IOPS, 12.83 MiB/s [2024-12-05T09:54:52.509Z] 3284.50 IOPS, 12.83 MiB/s [2024-12-05T09:54:53.886Z] 3287.67 IOPS, 12.84 MiB/s [2024-12-05T09:54:54.453Z] 3296.00 IOPS, 12.88 MiB/s [2024-12-05T09:54:55.828Z] 3302.40 IOPS, 12.90 MiB/s [2024-12-05T09:54:56.763Z] 3308.00 IOPS, 12.92 MiB/s [2024-12-05T09:54:57.697Z] 3310.57 IOPS, 12.93 MiB/s [2024-12-05T09:54:58.631Z] 3312.00 IOPS, 12.94 MiB/s [2024-12-05T09:54:59.564Z] 3313.78 IOPS, 12.94 MiB/s [2024-12-05T09:54:59.564Z] 3315.20 IOPS, 12.95 MiB/s 00:23:11.935 Latency(us) 00:23:11.935 [2024-12-05T09:54:59.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:11.935 Verification LBA range: start 0x0 length 0x2000 00:23:11.935 TLSTESTn1 : 10.03 3317.80 12.96 0.00 0.00 38503.21 9353.77 26571.87 00:23:11.935 [2024-12-05T09:54:59.564Z] =================================================================================================================== 00:23:11.935 [2024-12-05T09:54:59.564Z] Total : 3317.80 12.96 0.00 0.00 38503.21 9353.77 26571.87 00:23:11.935 { 00:23:11.935 "results": [ 00:23:11.935 { 00:23:11.935 "job": "TLSTESTn1", 00:23:11.935 "core_mask": "0x4", 00:23:11.935 "workload": "verify", 00:23:11.935 "status": "finished", 00:23:11.935 "verify_range": { 00:23:11.935 "start": 0, 00:23:11.935 "length": 8192 00:23:11.935 }, 00:23:11.935 "queue_depth": 128, 00:23:11.935 "io_size": 4096, 00:23:11.935 "runtime": 10.030758, 00:23:11.935 "iops": 3317.7951257522113, 00:23:11.935 "mibps": 12.960137209969576, 00:23:11.935 "io_failed": 0, 00:23:11.935 "io_timeout": 0, 00:23:11.935 "avg_latency_us": 38503.20738461539, 00:23:11.935 "min_latency_us": 9353.774545454546, 00:23:11.935 "max_latency_us": 26571.86909090909 00:23:11.935 } 00:23:11.935 ], 00:23:11.935 "core_count": 1 00:23:11.935 } 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 
'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 90635 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90635 ']' 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90635 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90635 00:23:11.935 killing process with pid 90635 00:23:11.935 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.935 00:23:11.935 Latency(us) 00:23:11.935 [2024-12-05T09:54:59.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.935 [2024-12-05T09:54:59.564Z] =================================================================================================================== 00:23:11.935 [2024-12-05T09:54:59.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90635' 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90635 00:23:11.935 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90635 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 90595 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90595 ']' 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90595 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90595 00:23:13.311 killing process with pid 90595 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90595' 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90595 00:23:13.311 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90595 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.281 
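The kill -0 / uname / ps / kill / wait traces that bracket both PIDs above come from a single helper. Reduced to its core idiom (a sketch; the real killprocess in autotest_common.sh adds sudo handling and stricter error paths):

killprocess() {
    local pid=$1
    kill -0 "$pid"                                    # fail fast if the process is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # SPDK apps show up as reactor_N
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap it and propagate its exit status
}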
09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=90799 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 90799 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90799 ']' 00:23:14.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.281 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:14.281 [2024-12-05 09:55:01.730716] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:14.281 [2024-12-05 09:55:01.730923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.555 [2024-12-05 09:55:01.921652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.555 [2024-12-05 09:55:02.070676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.555 [2024-12-05 09:55:02.070748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.555 [2024-12-05 09:55:02.070773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.555 [2024-12-05 09:55:02.070806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.555 [2024-12-05 09:55:02.070824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
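The waitforlisten fragments interleaved through these startups (local max_retries=100 and friends) boil down to polling the app's RPC socket until it answers. A sketch of the idea, assuming rpc_get_methods as the probe (the real helper also validates its arguments and supports other socket types):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid"                                               # the app must still be alive
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                                 # socket is up and serving RPCs
        fi
        sleep 0.5
    done
    return 1
}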
00:23:14.555 [2024-12-05 09:55:02.072378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Dd3IU6Mn4R 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dd3IU6Mn4R 00:23:15.122 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:15.382 [2024-12-05 09:55:02.863452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.382 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.640 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:15.899 [2024-12-05 09:55:03.351518] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.899 [2024-12-05 09:55:03.351874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:15.899 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.158 malloc0 00:23:16.158 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:16.417 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:23:16.676 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=90908 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 90908 /var/tmp/bdevperf.sock 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90908 ']' 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
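setup_nvmf_tgt condenses the whole TLS target bring-up into a handful of RPCs: a TCP transport, a subsystem backed by a malloc bdev, a listener opened with -k (TLS), and a host admitted only against the PSK registered in the keyring. The sequence as issued above (NQNs, address, and key path are the test's values):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator mirrors the key half of this: keyring_file_add_key against the bdevperf socket, then bdev_nvme_attach_controller with --psk key0, as the rpc.py calls that follow show.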
00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.934 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.934 [2024-12-05 09:55:04.507659] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:16.934 [2024-12-05 09:55:04.508285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90908 ] 00:23:17.192 [2024-12-05 09:55:04.667882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.192 [2024-12-05 09:55:04.779077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.132 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.132 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.132 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:23:18.132 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:18.390 [2024-12-05 09:55:05.903659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.390 nvme0n1 00:23:18.390 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.648 Running I/O for 1 seconds... 
00:23:19.583 3456.00 IOPS, 13.50 MiB/s 00:23:19.583 Latency(us) 00:23:19.583 [2024-12-05T09:55:07.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.583 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.583 Verification LBA range: start 0x0 length 0x2000 00:23:19.583 nvme0n1 : 1.03 3473.95 13.57 0.00 0.00 36405.13 10485.76 25380.31 00:23:19.583 [2024-12-05T09:55:07.212Z] =================================================================================================================== 00:23:19.583 [2024-12-05T09:55:07.212Z] Total : 3473.95 13.57 0.00 0.00 36405.13 10485.76 25380.31 00:23:19.583 { 00:23:19.583 "results": [ 00:23:19.583 { 00:23:19.583 "job": "nvme0n1", 00:23:19.583 "core_mask": "0x2", 00:23:19.583 "workload": "verify", 00:23:19.583 "status": "finished", 00:23:19.583 "verify_range": { 00:23:19.583 "start": 0, 00:23:19.583 "length": 8192 00:23:19.583 }, 00:23:19.583 "queue_depth": 128, 00:23:19.583 "io_size": 4096, 00:23:19.583 "runtime": 1.031679, 00:23:19.583 "iops": 3473.948776702831, 00:23:19.583 "mibps": 13.570112408995433, 00:23:19.583 "io_failed": 0, 00:23:19.583 "io_timeout": 0, 00:23:19.583 "avg_latency_us": 36405.12831168831, 00:23:19.583 "min_latency_us": 10485.76, 00:23:19.583 "max_latency_us": 25380.305454545454 00:23:19.583 } 00:23:19.583 ], 00:23:19.583 "core_count": 1 00:23:19.583 } 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 90908 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90908 ']' 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90908 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.583 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90908 00:23:19.842 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:19.842 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:19.842 killing process with pid 90908 00:23:19.842 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90908' 00:23:19.842 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.842 00:23:19.842 Latency(us) 00:23:19.842 [2024-12-05T09:55:07.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.842 [2024-12-05T09:55:07.471Z] =================================================================================================================== 00:23:19.842 [2024-12-05T09:55:07.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.842 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90908 00:23:19.842 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90908 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 90799 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90799 ']' 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90799 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90799 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.778 killing process with pid 90799 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90799' 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90799 00:23:20.778 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90799 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=90998 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 90998 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 90998 ']' 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.714 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.714 [2024-12-05 09:55:09.262437] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:21.714 [2024-12-05 09:55:09.262611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.972 [2024-12-05 09:55:09.440543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.972 [2024-12-05 09:55:09.544522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.972 [2024-12-05 09:55:09.544584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:21.972 [2024-12-05 09:55:09.544617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.972 [2024-12-05 09:55:09.544642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.972 [2024-12-05 09:55:09.544655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.972 [2024-12-05 09:55:09.545840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.537 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.537 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.537 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.537 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.537 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 [2024-12-05 09:55:10.191457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.794 malloc0 00:23:22.794 [2024-12-05 09:55:10.260590] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.794 [2024-12-05 09:55:10.260922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=91048 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 91048 /var/tmp/bdevperf.sock 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 91048 ']' 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.794 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 [2024-12-05 09:55:10.376102] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:23:22.794 [2024-12-05 09:55:10.376271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91048 ] 00:23:23.051 [2024-12-05 09:55:10.547305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.310 [2024-12-05 09:55:10.700109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.876 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.876 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.876 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dd3IU6Mn4R 00:23:24.134 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:24.392 [2024-12-05 09:55:11.847483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.392 nvme0n1 00:23:24.392 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.650 Running I/O for 1 seconds... 00:23:25.584 3368.00 IOPS, 13.16 MiB/s 00:23:25.584 Latency(us) 00:23:25.584 [2024-12-05T09:55:13.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.584 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:25.584 Verification LBA range: start 0x0 length 0x2000 00:23:25.584 nvme0n1 : 1.02 3431.52 13.40 0.00 0.00 36907.98 4051.32 25380.31 00:23:25.584 [2024-12-05T09:55:13.213Z] =================================================================================================================== 00:23:25.584 [2024-12-05T09:55:13.213Z] Total : 3431.52 13.40 0.00 0.00 36907.98 4051.32 25380.31 00:23:25.584 { 00:23:25.584 "results": [ 00:23:25.584 { 00:23:25.584 "job": "nvme0n1", 00:23:25.584 "core_mask": "0x2", 00:23:25.584 "workload": "verify", 00:23:25.584 "status": "finished", 00:23:25.584 "verify_range": { 00:23:25.584 "start": 0, 00:23:25.584 "length": 8192 00:23:25.584 }, 00:23:25.584 "queue_depth": 128, 00:23:25.584 "io_size": 4096, 00:23:25.584 "runtime": 1.019083, 00:23:25.584 "iops": 3431.516373053029, 00:23:25.584 "mibps": 13.404360832238394, 00:23:25.584 "io_failed": 0, 00:23:25.584 "io_timeout": 0, 00:23:25.584 "avg_latency_us": 36907.97660332233, 00:23:25.584 "min_latency_us": 4051.316363636364, 00:23:25.584 "max_latency_us": 25380.305454545454 00:23:25.584 } 00:23:25.584 ], 00:23:25.584 "core_count": 1 00:23:25.584 } 00:23:25.584 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:25.584 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.584 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.843 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.843 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:23:25.843 "subsystems": [ 00:23:25.843 { 00:23:25.843 "subsystem": "keyring", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "keyring_file_add_key", 00:23:25.843 "params": { 00:23:25.843 "name": "key0", 00:23:25.843 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:23:25.843 } 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "iobuf", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "iobuf_set_options", 00:23:25.843 "params": { 00:23:25.843 "enable_numa": false, 00:23:25.843 "large_bufsize": 135168, 00:23:25.843 "large_pool_count": 1024, 00:23:25.843 "small_bufsize": 8192, 00:23:25.843 "small_pool_count": 8192 00:23:25.843 } 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "sock", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "sock_set_default_impl", 00:23:25.843 "params": { 00:23:25.843 "impl_name": "posix" 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "sock_impl_set_options", 00:23:25.843 "params": { 00:23:25.843 "enable_ktls": false, 00:23:25.843 "enable_placement_id": 0, 00:23:25.843 "enable_quickack": false, 00:23:25.843 "enable_recv_pipe": true, 00:23:25.843 "enable_zerocopy_send_client": false, 00:23:25.843 "enable_zerocopy_send_server": true, 00:23:25.843 "impl_name": "ssl", 00:23:25.843 "recv_buf_size": 4096, 00:23:25.843 "send_buf_size": 4096, 00:23:25.843 "tls_version": 0, 00:23:25.843 "zerocopy_threshold": 0 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "sock_impl_set_options", 00:23:25.843 "params": { 00:23:25.843 "enable_ktls": false, 00:23:25.843 "enable_placement_id": 0, 00:23:25.843 "enable_quickack": false, 00:23:25.843 "enable_recv_pipe": true, 00:23:25.843 "enable_zerocopy_send_client": false, 00:23:25.843 "enable_zerocopy_send_server": true, 00:23:25.843 "impl_name": "posix", 00:23:25.843 "recv_buf_size": 2097152, 00:23:25.843 "send_buf_size": 2097152, 00:23:25.843 "tls_version": 0, 00:23:25.843 "zerocopy_threshold": 0 00:23:25.843 } 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "vmd", 00:23:25.843 "config": [] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "accel", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "accel_set_options", 00:23:25.843 "params": { 00:23:25.843 "buf_count": 2048, 00:23:25.843 "large_cache_size": 16, 00:23:25.843 "sequence_count": 2048, 00:23:25.843 "small_cache_size": 128, 00:23:25.843 "task_count": 2048 00:23:25.843 } 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "bdev", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "bdev_set_options", 00:23:25.843 "params": { 00:23:25.843 "bdev_auto_examine": true, 00:23:25.843 "bdev_io_cache_size": 256, 00:23:25.843 "bdev_io_pool_size": 65535, 00:23:25.843 "iobuf_large_cache_size": 16, 00:23:25.843 "iobuf_small_cache_size": 128 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_raid_set_options", 00:23:25.843 "params": { 00:23:25.843 "process_max_bandwidth_mb_sec": 0, 00:23:25.843 "process_window_size_kb": 1024 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_iscsi_set_options", 00:23:25.843 "params": { 00:23:25.843 "timeout_sec": 30 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_nvme_set_options", 00:23:25.843 "params": { 00:23:25.843 "action_on_timeout": "none", 00:23:25.843 "allow_accel_sequence": false, 00:23:25.843 "arbitration_burst": 0, 00:23:25.843 
"bdev_retry_count": 3, 00:23:25.843 "ctrlr_loss_timeout_sec": 0, 00:23:25.843 "delay_cmd_submit": true, 00:23:25.843 "dhchap_dhgroups": [ 00:23:25.843 "null", 00:23:25.843 "ffdhe2048", 00:23:25.843 "ffdhe3072", 00:23:25.843 "ffdhe4096", 00:23:25.843 "ffdhe6144", 00:23:25.843 "ffdhe8192" 00:23:25.843 ], 00:23:25.843 "dhchap_digests": [ 00:23:25.843 "sha256", 00:23:25.843 "sha384", 00:23:25.843 "sha512" 00:23:25.843 ], 00:23:25.843 "disable_auto_failback": false, 00:23:25.843 "fast_io_fail_timeout_sec": 0, 00:23:25.843 "generate_uuids": false, 00:23:25.843 "high_priority_weight": 0, 00:23:25.843 "io_path_stat": false, 00:23:25.843 "io_queue_requests": 0, 00:23:25.843 "keep_alive_timeout_ms": 10000, 00:23:25.843 "low_priority_weight": 0, 00:23:25.843 "medium_priority_weight": 0, 00:23:25.843 "nvme_adminq_poll_period_us": 10000, 00:23:25.843 "nvme_error_stat": false, 00:23:25.843 "nvme_ioq_poll_period_us": 0, 00:23:25.843 "rdma_cm_event_timeout_ms": 0, 00:23:25.843 "rdma_max_cq_size": 0, 00:23:25.843 "rdma_srq_size": 0, 00:23:25.843 "reconnect_delay_sec": 0, 00:23:25.843 "timeout_admin_us": 0, 00:23:25.843 "timeout_us": 0, 00:23:25.843 "transport_ack_timeout": 0, 00:23:25.843 "transport_retry_count": 4, 00:23:25.843 "transport_tos": 0 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_nvme_set_hotplug", 00:23:25.843 "params": { 00:23:25.843 "enable": false, 00:23:25.843 "period_us": 100000 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_malloc_create", 00:23:25.843 "params": { 00:23:25.843 "block_size": 4096, 00:23:25.843 "dif_is_head_of_md": false, 00:23:25.843 "dif_pi_format": 0, 00:23:25.843 "dif_type": 0, 00:23:25.843 "md_size": 0, 00:23:25.843 "name": "malloc0", 00:23:25.843 "num_blocks": 8192, 00:23:25.843 "optimal_io_boundary": 0, 00:23:25.843 "physical_block_size": 4096, 00:23:25.843 "uuid": "ad40868d-d3f7-48b0-86f0-f3bb29868c0c" 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "bdev_wait_for_examine" 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "nbd", 00:23:25.843 "config": [] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "scheduler", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "framework_set_scheduler", 00:23:25.843 "params": { 00:23:25.843 "name": "static" 00:23:25.843 } 00:23:25.843 } 00:23:25.843 ] 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "subsystem": "nvmf", 00:23:25.843 "config": [ 00:23:25.843 { 00:23:25.843 "method": "nvmf_set_config", 00:23:25.843 "params": { 00:23:25.843 "admin_cmd_passthru": { 00:23:25.843 "identify_ctrlr": false 00:23:25.843 }, 00:23:25.843 "dhchap_dhgroups": [ 00:23:25.843 "null", 00:23:25.843 "ffdhe2048", 00:23:25.843 "ffdhe3072", 00:23:25.843 "ffdhe4096", 00:23:25.843 "ffdhe6144", 00:23:25.843 "ffdhe8192" 00:23:25.843 ], 00:23:25.843 "dhchap_digests": [ 00:23:25.843 "sha256", 00:23:25.843 "sha384", 00:23:25.843 "sha512" 00:23:25.843 ], 00:23:25.843 "discovery_filter": "match_any" 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "nvmf_set_max_subsystems", 00:23:25.843 "params": { 00:23:25.843 "max_subsystems": 1024 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "nvmf_set_crdt", 00:23:25.843 "params": { 00:23:25.843 "crdt1": 0, 00:23:25.843 "crdt2": 0, 00:23:25.843 "crdt3": 0 00:23:25.843 } 00:23:25.843 }, 00:23:25.843 { 00:23:25.843 "method": "nvmf_create_transport", 00:23:25.843 "params": { 00:23:25.843 "abort_timeout_sec": 1, 00:23:25.843 "ack_timeout": 0, 
00:23:25.843 "buf_cache_size": 4294967295, 00:23:25.843 "c2h_success": false, 00:23:25.843 "data_wr_pool_size": 0, 00:23:25.843 "dif_insert_or_strip": false, 00:23:25.843 "in_capsule_data_size": 4096, 00:23:25.843 "io_unit_size": 131072, 00:23:25.843 "max_aq_depth": 128, 00:23:25.843 "max_io_qpairs_per_ctrlr": 127, 00:23:25.843 "max_io_size": 131072, 00:23:25.843 "max_queue_depth": 128, 00:23:25.844 "num_shared_buffers": 511, 00:23:25.844 "sock_priority": 0, 00:23:25.844 "trtype": "TCP", 00:23:25.844 "zcopy": false 00:23:25.844 } 00:23:25.844 }, 00:23:25.844 { 00:23:25.844 "method": "nvmf_create_subsystem", 00:23:25.844 "params": { 00:23:25.844 "allow_any_host": false, 00:23:25.844 "ana_reporting": false, 00:23:25.844 "max_cntlid": 65519, 00:23:25.844 "max_namespaces": 32, 00:23:25.844 "min_cntlid": 1, 00:23:25.844 "model_number": "SPDK bdev Controller", 00:23:25.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.844 "serial_number": "00000000000000000000" 00:23:25.844 } 00:23:25.844 }, 00:23:25.844 { 00:23:25.844 "method": "nvmf_subsystem_add_host", 00:23:25.844 "params": { 00:23:25.844 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.844 "psk": "key0" 00:23:25.844 } 00:23:25.844 }, 00:23:25.844 { 00:23:25.844 "method": "nvmf_subsystem_add_ns", 00:23:25.844 "params": { 00:23:25.844 "namespace": { 00:23:25.844 "bdev_name": "malloc0", 00:23:25.844 "nguid": "AD40868DD3F748B086F0F3BB29868C0C", 00:23:25.844 "no_auto_visible": false, 00:23:25.844 "nsid": 1, 00:23:25.844 "uuid": "ad40868d-d3f7-48b0-86f0-f3bb29868c0c" 00:23:25.844 }, 00:23:25.844 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:25.844 } 00:23:25.844 }, 00:23:25.844 { 00:23:25.844 "method": "nvmf_subsystem_add_listener", 00:23:25.844 "params": { 00:23:25.844 "listen_address": { 00:23:25.844 "adrfam": "IPv4", 00:23:25.844 "traddr": "10.0.0.3", 00:23:25.844 "trsvcid": "4420", 00:23:25.844 "trtype": "TCP" 00:23:25.844 }, 00:23:25.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.844 "secure_channel": false, 00:23:25.844 "sock_impl": "ssl" 00:23:25.844 } 00:23:25.844 } 00:23:25.844 ] 00:23:25.844 } 00:23:25.844 ] 00:23:25.844 }' 00:23:25.844 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:26.103 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:26.103 "subsystems": [ 00:23:26.103 { 00:23:26.103 "subsystem": "keyring", 00:23:26.103 "config": [ 00:23:26.103 { 00:23:26.103 "method": "keyring_file_add_key", 00:23:26.103 "params": { 00:23:26.103 "name": "key0", 00:23:26.103 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:23:26.103 } 00:23:26.103 } 00:23:26.103 ] 00:23:26.103 }, 00:23:26.103 { 00:23:26.103 "subsystem": "iobuf", 00:23:26.103 "config": [ 00:23:26.103 { 00:23:26.103 "method": "iobuf_set_options", 00:23:26.103 "params": { 00:23:26.103 "enable_numa": false, 00:23:26.103 "large_bufsize": 135168, 00:23:26.103 "large_pool_count": 1024, 00:23:26.103 "small_bufsize": 8192, 00:23:26.103 "small_pool_count": 8192 00:23:26.103 } 00:23:26.103 } 00:23:26.103 ] 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "subsystem": "sock", 00:23:26.104 "config": [ 00:23:26.104 { 00:23:26.104 "method": "sock_set_default_impl", 00:23:26.104 "params": { 00:23:26.104 "impl_name": "posix" 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "sock_impl_set_options", 00:23:26.104 "params": { 00:23:26.104 "enable_ktls": false, 00:23:26.104 "enable_placement_id": 0, 
00:23:26.104 "enable_quickack": false, 00:23:26.104 "enable_recv_pipe": true, 00:23:26.104 "enable_zerocopy_send_client": false, 00:23:26.104 "enable_zerocopy_send_server": true, 00:23:26.104 "impl_name": "ssl", 00:23:26.104 "recv_buf_size": 4096, 00:23:26.104 "send_buf_size": 4096, 00:23:26.104 "tls_version": 0, 00:23:26.104 "zerocopy_threshold": 0 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "sock_impl_set_options", 00:23:26.104 "params": { 00:23:26.104 "enable_ktls": false, 00:23:26.104 "enable_placement_id": 0, 00:23:26.104 "enable_quickack": false, 00:23:26.104 "enable_recv_pipe": true, 00:23:26.104 "enable_zerocopy_send_client": false, 00:23:26.104 "enable_zerocopy_send_server": true, 00:23:26.104 "impl_name": "posix", 00:23:26.104 "recv_buf_size": 2097152, 00:23:26.104 "send_buf_size": 2097152, 00:23:26.104 "tls_version": 0, 00:23:26.104 "zerocopy_threshold": 0 00:23:26.104 } 00:23:26.104 } 00:23:26.104 ] 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "subsystem": "vmd", 00:23:26.104 "config": [] 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "subsystem": "accel", 00:23:26.104 "config": [ 00:23:26.104 { 00:23:26.104 "method": "accel_set_options", 00:23:26.104 "params": { 00:23:26.104 "buf_count": 2048, 00:23:26.104 "large_cache_size": 16, 00:23:26.104 "sequence_count": 2048, 00:23:26.104 "small_cache_size": 128, 00:23:26.104 "task_count": 2048 00:23:26.104 } 00:23:26.104 } 00:23:26.104 ] 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "subsystem": "bdev", 00:23:26.104 "config": [ 00:23:26.104 { 00:23:26.104 "method": "bdev_set_options", 00:23:26.104 "params": { 00:23:26.104 "bdev_auto_examine": true, 00:23:26.104 "bdev_io_cache_size": 256, 00:23:26.104 "bdev_io_pool_size": 65535, 00:23:26.104 "iobuf_large_cache_size": 16, 00:23:26.104 "iobuf_small_cache_size": 128 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_raid_set_options", 00:23:26.104 "params": { 00:23:26.104 "process_max_bandwidth_mb_sec": 0, 00:23:26.104 "process_window_size_kb": 1024 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_iscsi_set_options", 00:23:26.104 "params": { 00:23:26.104 "timeout_sec": 30 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_nvme_set_options", 00:23:26.104 "params": { 00:23:26.104 "action_on_timeout": "none", 00:23:26.104 "allow_accel_sequence": false, 00:23:26.104 "arbitration_burst": 0, 00:23:26.104 "bdev_retry_count": 3, 00:23:26.104 "ctrlr_loss_timeout_sec": 0, 00:23:26.104 "delay_cmd_submit": true, 00:23:26.104 "dhchap_dhgroups": [ 00:23:26.104 "null", 00:23:26.104 "ffdhe2048", 00:23:26.104 "ffdhe3072", 00:23:26.104 "ffdhe4096", 00:23:26.104 "ffdhe6144", 00:23:26.104 "ffdhe8192" 00:23:26.104 ], 00:23:26.104 "dhchap_digests": [ 00:23:26.104 "sha256", 00:23:26.104 "sha384", 00:23:26.104 "sha512" 00:23:26.104 ], 00:23:26.104 "disable_auto_failback": false, 00:23:26.104 "fast_io_fail_timeout_sec": 0, 00:23:26.104 "generate_uuids": false, 00:23:26.104 "high_priority_weight": 0, 00:23:26.104 "io_path_stat": false, 00:23:26.104 "io_queue_requests": 512, 00:23:26.104 "keep_alive_timeout_ms": 10000, 00:23:26.104 "low_priority_weight": 0, 00:23:26.104 "medium_priority_weight": 0, 00:23:26.104 "nvme_adminq_poll_period_us": 10000, 00:23:26.104 "nvme_error_stat": false, 00:23:26.104 "nvme_ioq_poll_period_us": 0, 00:23:26.104 "rdma_cm_event_timeout_ms": 0, 00:23:26.104 "rdma_max_cq_size": 0, 00:23:26.104 "rdma_srq_size": 0, 00:23:26.104 "reconnect_delay_sec": 0, 00:23:26.104 "timeout_admin_us": 0, 00:23:26.104 
"timeout_us": 0, 00:23:26.104 "transport_ack_timeout": 0, 00:23:26.104 "transport_retry_count": 4, 00:23:26.104 "transport_tos": 0 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_nvme_attach_controller", 00:23:26.104 "params": { 00:23:26.104 "adrfam": "IPv4", 00:23:26.104 "ctrlr_loss_timeout_sec": 0, 00:23:26.104 "ddgst": false, 00:23:26.104 "fast_io_fail_timeout_sec": 0, 00:23:26.104 "hdgst": false, 00:23:26.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.104 "multipath": "multipath", 00:23:26.104 "name": "nvme0", 00:23:26.104 "prchk_guard": false, 00:23:26.104 "prchk_reftag": false, 00:23:26.104 "psk": "key0", 00:23:26.104 "reconnect_delay_sec": 0, 00:23:26.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.104 "traddr": "10.0.0.3", 00:23:26.104 "trsvcid": "4420", 00:23:26.104 "trtype": "TCP" 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_nvme_set_hotplug", 00:23:26.104 "params": { 00:23:26.104 "enable": false, 00:23:26.104 "period_us": 100000 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_enable_histogram", 00:23:26.104 "params": { 00:23:26.104 "enable": true, 00:23:26.104 "name": "nvme0n1" 00:23:26.104 } 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "method": "bdev_wait_for_examine" 00:23:26.104 } 00:23:26.104 ] 00:23:26.104 }, 00:23:26.104 { 00:23:26.104 "subsystem": "nbd", 00:23:26.104 "config": [] 00:23:26.104 } 00:23:26.104 ] 00:23:26.104 }' 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 91048 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 91048 ']' 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 91048 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91048 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.104 killing process with pid 91048 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91048' 00:23:26.104 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.104 00:23:26.104 Latency(us) 00:23:26.104 [2024-12-05T09:55:13.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.104 [2024-12-05T09:55:13.733Z] =================================================================================================================== 00:23:26.104 [2024-12-05T09:55:13.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 91048 00:23:26.104 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 91048 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 90998 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 90998 ']' 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 90998 
00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90998 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.039 killing process with pid 90998 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90998' 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 90998 00:23:27.039 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 90998 00:23:27.990 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:27.990 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:27.990 "subsystems": [ 00:23:27.990 { 00:23:27.990 "subsystem": "keyring", 00:23:27.990 "config": [ 00:23:27.990 { 00:23:27.990 "method": "keyring_file_add_key", 00:23:27.990 "params": { 00:23:27.990 "name": "key0", 00:23:27.990 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:23:27.990 } 00:23:27.990 } 00:23:27.990 ] 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "subsystem": "iobuf", 00:23:27.990 "config": [ 00:23:27.990 { 00:23:27.990 "method": "iobuf_set_options", 00:23:27.990 "params": { 00:23:27.990 "enable_numa": false, 00:23:27.990 "large_bufsize": 135168, 00:23:27.990 "large_pool_count": 1024, 00:23:27.990 "small_bufsize": 8192, 00:23:27.990 "small_pool_count": 8192 00:23:27.990 } 00:23:27.990 } 00:23:27.990 ] 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "subsystem": "sock", 00:23:27.990 "config": [ 00:23:27.990 { 00:23:27.990 "method": "sock_set_default_impl", 00:23:27.990 "params": { 00:23:27.990 "impl_name": "posix" 00:23:27.990 } 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "method": "sock_impl_set_options", 00:23:27.990 "params": { 00:23:27.990 "enable_ktls": false, 00:23:27.990 "enable_placement_id": 0, 00:23:27.990 "enable_quickack": false, 00:23:27.990 "enable_recv_pipe": true, 00:23:27.990 "enable_zerocopy_send_client": false, 00:23:27.990 "enable_zerocopy_send_server": true, 00:23:27.990 "impl_name": "ssl", 00:23:27.990 "recv_buf_size": 4096, 00:23:27.990 "send_buf_size": 4096, 00:23:27.990 "tls_version": 0, 00:23:27.990 "zerocopy_threshold": 0 00:23:27.990 } 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "method": "sock_impl_set_options", 00:23:27.990 "params": { 00:23:27.990 "enable_ktls": false, 00:23:27.990 "enable_placement_id": 0, 00:23:27.990 "enable_quickack": false, 00:23:27.990 "enable_recv_pipe": true, 00:23:27.990 "enable_zerocopy_send_client": false, 00:23:27.990 "enable_zerocopy_send_server": true, 00:23:27.990 "impl_name": "posix", 00:23:27.990 "recv_buf_size": 2097152, 00:23:27.990 "send_buf_size": 2097152, 00:23:27.990 "tls_version": 0, 00:23:27.990 "zerocopy_threshold": 0 00:23:27.990 } 00:23:27.990 } 00:23:27.990 ] 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "subsystem": "vmd", 00:23:27.990 "config": [] 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "subsystem": "accel", 00:23:27.990 "config": [ 00:23:27.990 { 00:23:27.990 "method": "accel_set_options", 00:23:27.990 "params": { 00:23:27.990 "buf_count": 2048, 
00:23:27.990 "large_cache_size": 16, 00:23:27.990 "sequence_count": 2048, 00:23:27.990 "small_cache_size": 128, 00:23:27.990 "task_count": 2048 00:23:27.990 } 00:23:27.990 } 00:23:27.990 ] 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "subsystem": "bdev", 00:23:27.990 "config": [ 00:23:27.990 { 00:23:27.990 "method": "bdev_set_options", 00:23:27.990 "params": { 00:23:27.990 "bdev_auto_examine": true, 00:23:27.990 "bdev_io_cache_size": 256, 00:23:27.990 "bdev_io_pool_size": 65535, 00:23:27.990 "iobuf_large_cache_size": 16, 00:23:27.990 "iobuf_small_cache_size": 128 00:23:27.990 } 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "method": "bdev_raid_set_options", 00:23:27.990 "params": { 00:23:27.990 "process_max_bandwidth_mb_sec": 0, 00:23:27.990 "process_window_size_kb": 1024 00:23:27.990 } 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "method": "bdev_iscsi_set_options", 00:23:27.990 "params": { 00:23:27.990 "timeout_sec": 30 00:23:27.990 } 00:23:27.990 }, 00:23:27.990 { 00:23:27.990 "method": "bdev_nvme_set_options", 00:23:27.990 "params": { 00:23:27.990 "action_on_timeout": "none", 00:23:27.990 "allow_accel_sequence": false, 00:23:27.990 "arbitration_burst": 0, 00:23:27.990 "bdev_retry_count": 3, 00:23:27.990 "ctrlr_loss_timeout_sec": 0, 00:23:27.990 "delay_cmd_submit": true, 00:23:27.990 "dhchap_dhgroups": [ 00:23:27.990 "null", 00:23:27.990 "ffdhe2048", 00:23:27.990 "ffdhe3072", 00:23:27.990 "ffdhe4096", 00:23:27.990 "ffdhe6144", 00:23:27.990 "ffdhe8192" 00:23:27.990 ], 00:23:27.990 "dhchap_digests": [ 00:23:27.990 "sha256", 00:23:27.990 "sha384", 00:23:27.990 "sha512" 00:23:27.990 ], 00:23:27.990 "disable_auto_failback": false, 00:23:27.990 "fast_io_fail_timeout_sec": 0, 00:23:27.990 "generate_uuids": false, 00:23:27.990 "high_priority_weight": 0, 00:23:27.990 "io_path_stat": false, 00:23:27.990 "io_queue_requests": 0, 00:23:27.990 "keep_alive_timeout_ms": 10000, 00:23:27.990 "low_priority_weight": 0, 00:23:27.990 "medium_priority_weight": 0, 00:23:27.990 "nvme_adminq_poll_period_us": 10000, 00:23:27.990 "nvme_error_stat": false, 00:23:27.990 "nvme_ioq_poll_period_us": 0, 00:23:27.991 "rdma_cm_event_timeout_ms": 0, 00:23:27.991 "rdma_max_cq_size": 0, 00:23:27.991 "rdma_srq_size": 0, 00:23:27.991 "reconnect_delay_sec": 0, 00:23:27.991 "timeout_admin_us": 0, 00:23:27.991 "timeout_us": 0, 00:23:27.991 "transport_ack_timeout": 0, 00:23:27.991 "transport_retry_count": 4, 00:23:27.991 "transport_tos": 0 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "bdev_nvme_set_hotplug", 00:23:27.991 "params": { 00:23:27.991 "enable": false, 00:23:27.991 "period_us": 100000 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "bdev_malloc_create", 00:23:27.991 "params": { 00:23:27.991 "block_size": 4096, 00:23:27.991 "dif_is_head_of_md": false, 00:23:27.991 "dif_pi_format": 0, 00:23:27.991 "dif_type": 0, 00:23:27.991 "md_size": 0, 00:23:27.991 "name": "malloc0", 00:23:27.991 "num_blocks": 8192, 00:23:27.991 "optimal_io_boundary": 0, 00:23:27.991 "physical_block_size": 4096, 00:23:27.991 "uuid": "ad40868d-d3f7-48b0-86f0-f3bb29868c0c" 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "bdev_wait_for_examine" 00:23:27.991 } 00:23:27.991 ] 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "subsystem": "nbd", 00:23:27.991 "config": [] 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "subsystem": "scheduler", 00:23:27.991 "config": [ 00:23:27.991 { 00:23:27.991 "method": "framework_set_scheduler", 00:23:27.991 "params": { 00:23:27.991 "name": "static" 00:23:27.991 } 
00:23:27.991 } 00:23:27.991 ] 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "subsystem": "nvmf", 00:23:27.991 "config": [ 00:23:27.991 { 00:23:27.991 "method": "nvmf_set_config", 00:23:27.991 "params": { 00:23:27.991 "admin_cmd_passthru": { 00:23:27.991 "identify_ctrlr": false 00:23:27.991 }, 00:23:27.991 "dhchap_dhgroups": [ 00:23:27.991 "null", 00:23:27.991 "ffdhe2048", 00:23:27.991 "ffdhe3072", 00:23:27.991 "ffdhe4096", 00:23:27.991 "ffdhe6144", 00:23:27.991 "ffdhe8192" 00:23:27.991 ], 00:23:27.991 "dhchap_digests": [ 00:23:27.991 "sha256", 00:23:27.991 "sha384", 00:23:27.991 "sha512" 00:23:27.991 ], 00:23:27.991 "discovery_filter": "match_any" 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_set_max_subsystems", 00:23:27.991 "params": { 00:23:27.991 "max_subsystems": 1024 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_set_crdt", 00:23:27.991 "params": { 00:23:27.991 "crdt1": 0, 00:23:27.991 "crdt2": 0, 00:23:27.991 "crdt3": 0 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_create_transport", 00:23:27.991 "params": { 00:23:27.991 "abort_timeout_sec": 1, 00:23:27.991 "ack_timeout": 0, 00:23:27.991 "buf_cache_size": 4294967295, 00:23:27.991 "c2h_success": false, 00:23:27.991 "data_wr_pool_size": 0, 00:23:27.991 "dif_insert_or_strip": false, 00:23:27.991 "in_capsule_data_size": 4096, 00:23:27.991 "io_unit_size": 131072, 00:23:27.991 "max_aq_depth": 128, 00:23:27.991 "max_io_qpairs_per_ctrlr": 127, 00:23:27.991 "max_io_size": 131072, 00:23:27.991 "max_queue_depth": 128, 00:23:27.991 "num_shared_buffers": 511, 00:23:27.991 "sock_priority": 0, 00:23:27.991 "trtype": "TCP", 00:23:27.991 "zcopy": false 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_create_subsystem", 00:23:27.991 "params": { 00:23:27.991 "allow_any_host": false, 00:23:27.991 "ana_reporting": false, 00:23:27.991 "max_cntlid": 65519, 00:23:27.991 "max_namespaces": 32, 00:23:27.991 "min_cntlid": 1, 00:23:27.991 "model_number": "SPDK bdev Controller", 00:23:27.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.991 "serial_number": "00000000000000000000" 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_subsystem_add_host", 00:23:27.991 "params": { 00:23:27.991 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.991 "psk": "key0" 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_subsystem_add_ns", 00:23:27.991 "params": { 00:23:27.991 "namespace": { 00:23:27.991 "bdev_name": "malloc0", 00:23:27.991 "nguid": "AD40868DD3F748B086F0F3BB29868C0C", 00:23:27.991 "no_auto_visible": false, 00:23:27.991 "nsid": 1, 00:23:27.991 "uuid": "ad40868d-d3f7-48b0-86f0-f3bb29868c0c" 00:23:27.991 }, 00:23:27.991 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:27.991 } 00:23:27.991 }, 00:23:27.991 { 00:23:27.991 "method": "nvmf_subsystem_add_listener", 00:23:27.991 "params": { 00:23:27.991 "listen_address": { 00:23:27.991 "adrfam": "IPv4", 00:23:27.991 "traddr": "10.0.0.3", 00:23:27.991 "trsvcid": "4420", 00:23:27.991 "trtype": "TCP" 00:23:27.991 }, 00:23:27.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.991 "secure_channel": false, 00:23:27.991 "sock_impl": "ssl" 00:23:27.991 } 00:23:27.991 } 00:23:27.991 ] 00:23:27.991 } 00:23:27.991 ] 00:23:27.991 }' 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=91157 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 91157 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 91157 ']' 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.249 [2024-12-05 09:55:15.696995] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:28.249 [2024-12-05 09:55:15.697916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.507 [2024-12-05 09:55:15.882826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.507 [2024-12-05 09:55:15.993912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.507 [2024-12-05 09:55:15.993996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.507 [2024-12-05 09:55:15.994015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.507 [2024-12-05 09:55:15.994043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.508 [2024-12-05 09:55:15.994066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
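[editor's note] The -c /dev/fd/62 in the nvmf_tgt command above is most likely bash process substitution: the harness echoes the captured JSON and hands the target a readable fd instead of writing a temp file. A hedged sketch of the equivalent invocation, reusing the exact binary, flags, and network namespace traced here:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg")   # the <(...) pipe is what surfaces as /dev/fd/62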
00:23:28.508 [2024-12-05 09:55:15.995262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.075 [2024-12-05 09:55:16.483611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.075 [2024-12-05 09:55:16.515559] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.075 [2024-12-05 09:55:16.515864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=91200 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 91200 /var/tmp/bdevperf.sock 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 91200 ']' 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.075 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:29.075 "subsystems": [ 00:23:29.075 { 00:23:29.075 "subsystem": "keyring", 00:23:29.075 "config": [ 00:23:29.075 { 00:23:29.075 "method": "keyring_file_add_key", 00:23:29.075 "params": { 00:23:29.075 "name": "key0", 00:23:29.075 "path": "/tmp/tmp.Dd3IU6Mn4R" 00:23:29.075 } 00:23:29.075 } 00:23:29.075 ] 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "subsystem": "iobuf", 00:23:29.075 "config": [ 00:23:29.075 { 00:23:29.075 "method": "iobuf_set_options", 00:23:29.075 "params": { 00:23:29.075 "enable_numa": false, 00:23:29.075 "large_bufsize": 135168, 00:23:29.075 "large_pool_count": 1024, 00:23:29.075 "small_bufsize": 8192, 00:23:29.075 "small_pool_count": 8192 00:23:29.075 } 00:23:29.075 } 00:23:29.075 ] 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "subsystem": "sock", 00:23:29.075 "config": [ 00:23:29.075 { 00:23:29.075 "method": "sock_set_default_impl", 00:23:29.075 "params": { 00:23:29.075 "impl_name": "posix" 00:23:29.075 } 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "method": "sock_impl_set_options", 00:23:29.075 "params": { 00:23:29.075 "enable_ktls": false, 00:23:29.075 "enable_placement_id": 0, 00:23:29.075 "enable_quickack": false, 00:23:29.075 "enable_recv_pipe": true, 00:23:29.075 "enable_zerocopy_send_client": false, 00:23:29.075 "enable_zerocopy_send_server": true, 00:23:29.075 "impl_name": "ssl", 00:23:29.075 "recv_buf_size": 4096, 00:23:29.075 "send_buf_size": 4096, 00:23:29.075 "tls_version": 0, 00:23:29.075 "zerocopy_threshold": 0 00:23:29.075 } 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "method": "sock_impl_set_options", 00:23:29.075 "params": { 00:23:29.075 "enable_ktls": false, 00:23:29.075 "enable_placement_id": 0, 00:23:29.075 "enable_quickack": false, 00:23:29.075 "enable_recv_pipe": true, 00:23:29.075 "enable_zerocopy_send_client": false, 00:23:29.075 "enable_zerocopy_send_server": true, 00:23:29.075 "impl_name": "posix", 00:23:29.075 "recv_buf_size": 2097152, 00:23:29.075 "send_buf_size": 2097152, 00:23:29.075 "tls_version": 0, 00:23:29.075 "zerocopy_threshold": 0 00:23:29.075 } 00:23:29.075 } 00:23:29.075 ] 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "subsystem": "vmd", 00:23:29.075 "config": [] 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "subsystem": "accel", 00:23:29.075 "config": [ 00:23:29.075 { 00:23:29.075 "method": "accel_set_options", 00:23:29.075 "params": { 00:23:29.075 "buf_count": 2048, 00:23:29.075 "large_cache_size": 16, 00:23:29.075 "sequence_count": 2048, 00:23:29.075 "small_cache_size": 128, 00:23:29.075 "task_count": 2048 00:23:29.075 } 00:23:29.075 } 00:23:29.075 ] 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "subsystem": "bdev", 00:23:29.075 "config": [ 00:23:29.075 { 00:23:29.075 "method": "bdev_set_options", 00:23:29.075 "params": { 00:23:29.075 "bdev_auto_examine": true, 00:23:29.075 "bdev_io_cache_size": 256, 00:23:29.075 "bdev_io_pool_size": 65535, 00:23:29.075 "iobuf_large_cache_size": 16, 00:23:29.075 "iobuf_small_cache_size": 128 00:23:29.075 } 00:23:29.075 }, 00:23:29.075 { 00:23:29.075 "method": "bdev_raid_set_options", 00:23:29.075 "params": { 00:23:29.075 "process_max_bandwidth_mb_sec": 0, 00:23:29.076 "process_window_size_kb": 1024 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "method": "bdev_iscsi_set_options", 00:23:29.076 "params": { 00:23:29.076 "timeout_sec": 30 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 
00:23:29.076 "method": "bdev_nvme_set_options", 00:23:29.076 "params": { 00:23:29.076 "action_on_timeout": "none", 00:23:29.076 "allow_accel_sequence": false, 00:23:29.076 "arbitration_burst": 0, 00:23:29.076 "bdev_retry_count": 3, 00:23:29.076 "ctrlr_loss_timeout_sec": 0, 00:23:29.076 "delay_cmd_submit": true, 00:23:29.076 "dhchap_dhgroups": [ 00:23:29.076 "null", 00:23:29.076 "ffdhe2048", 00:23:29.076 "ffdhe3072", 00:23:29.076 "ffdhe4096", 00:23:29.076 "ffdhe6144", 00:23:29.076 "ffdhe8192" 00:23:29.076 ], 00:23:29.076 "dhchap_digests": [ 00:23:29.076 "sha256", 00:23:29.076 "sha384", 00:23:29.076 "sha512" 00:23:29.076 ], 00:23:29.076 "disable_auto_failback": false, 00:23:29.076 "fast_io_fail_timeout_sec": 0, 00:23:29.076 "generate_uuids": false, 00:23:29.076 "high_priority_weight": 0, 00:23:29.076 "io_path_stat": false, 00:23:29.076 "io_queue_requests": 512, 00:23:29.076 "keep_alive_timeout_ms": 10000, 00:23:29.076 "low_priority_weight": 0, 00:23:29.076 "medium_priority_weight": 0, 00:23:29.076 "nvme_adminq_poll_period_us": 10000, 00:23:29.076 "nvme_error_stat": false, 00:23:29.076 "nvme_ioq_poll_period_us": 0, 00:23:29.076 "rdma_cm_event_timeout_ms": 0, 00:23:29.076 "rdma_max_cq_size": 0, 00:23:29.076 "rdma_srq_size": 0, 00:23:29.076 "reconnect_delay_sec": 0, 00:23:29.076 "timeout_admin_us": 0, 00:23:29.076 "timeout_us": 0, 00:23:29.076 "transport_ack_timeout": 0, 00:23:29.076 "transport_retry_count": 4, 00:23:29.076 "transport_tos": 0 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "method": "bdev_nvme_attach_controller", 00:23:29.076 "params": { 00:23:29.076 "adrfam": "IPv4", 00:23:29.076 "ctrlr_loss_timeout_sec": 0, 00:23:29.076 "ddgst": false, 00:23:29.076 "fast_io_fail_timeout_sec": 0, 00:23:29.076 "hdgst": false, 00:23:29.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.076 "multipath": "multipath", 00:23:29.076 "name": "nvme0", 00:23:29.076 "prchk_guard": false, 00:23:29.076 "prchk_reftag": false, 00:23:29.076 "psk": "key0", 00:23:29.076 "reconnect_delay_sec": 0, 00:23:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.076 "traddr": "10.0.0.3", 00:23:29.076 "trsvcid": "4420", 00:23:29.076 "trtype": "TCP" 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "method": "bdev_nvme_set_hotplug", 00:23:29.076 "params": { 00:23:29.076 "enable": false, 00:23:29.076 "period_us": 100000 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "method": "bdev_enable_histogram", 00:23:29.076 "params": { 00:23:29.076 "enable": true, 00:23:29.076 "name": "nvme0n1" 00:23:29.076 } 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "method": "bdev_wait_for_examine" 00:23:29.076 } 00:23:29.076 ] 00:23:29.076 }, 00:23:29.076 { 00:23:29.076 "subsystem": "nbd", 00:23:29.076 "config": [] 00:23:29.076 } 00:23:29.076 ] 00:23:29.076 }' 00:23:29.076 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.334 [2024-12-05 09:55:16.707190] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:23:29.334 [2024-12-05 09:55:16.707394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91200 ] 00:23:29.335 [2024-12-05 09:55:16.898682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.593 [2024-12-05 09:55:17.046476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.855 [2024-12-05 09:55:17.403636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.113 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.114 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.114 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:30.114 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:30.372 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.372 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.372 Running I/O for 1 seconds... 00:23:31.749 3182.00 IOPS, 12.43 MiB/s 00:23:31.749 Latency(us) 00:23:31.749 [2024-12-05T09:55:19.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.749 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.749 Verification LBA range: start 0x0 length 0x2000 00:23:31.749 nvme0n1 : 1.02 3240.02 12.66 0.00 0.00 39144.23 7387.69 33363.78 00:23:31.749 [2024-12-05T09:55:19.378Z] =================================================================================================================== 00:23:31.749 [2024-12-05T09:55:19.378Z] Total : 3240.02 12.66 0.00 0.00 39144.23 7387.69 33363.78 00:23:31.749 { 00:23:31.749 "results": [ 00:23:31.749 { 00:23:31.749 "job": "nvme0n1", 00:23:31.749 "core_mask": "0x2", 00:23:31.749 "workload": "verify", 00:23:31.749 "status": "finished", 00:23:31.749 "verify_range": { 00:23:31.749 "start": 0, 00:23:31.749 "length": 8192 00:23:31.749 }, 00:23:31.749 "queue_depth": 128, 00:23:31.749 "io_size": 4096, 00:23:31.749 "runtime": 1.0216, 00:23:31.749 "iops": 3240.015661707126, 00:23:31.749 "mibps": 12.656311178543461, 00:23:31.749 "io_failed": 0, 00:23:31.749 "io_timeout": 0, 00:23:31.749 "avg_latency_us": 39144.23341279868, 00:23:31.749 "min_latency_us": 7387.694545454546, 00:23:31.749 "max_latency_us": 33363.781818181815 00:23:31.749 } 00:23:31.749 ], 00:23:31.749 "core_count": 1 00:23:31.749 } 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:31.749 
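[editor's note] Condensed, the verification just traced is a two-step check: confirm the TLS attach really produced controller nvme0, then drive the verify workload against it. Equivalent commands, with paths exactly as logged:

    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]   # guard: the attach with --psk key0 must have succeeded
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests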
09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:31.749 nvmf_trace.0 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 91200 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 91200 ']' 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 91200 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91200 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.749 killing process with pid 91200 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91200' 00:23:31.749 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.749 00:23:31.749 Latency(us) 00:23:31.749 [2024-12-05T09:55:19.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.749 [2024-12-05T09:55:19.378Z] =================================================================================================================== 00:23:31.749 [2024-12-05T09:55:19.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 91200 00:23:31.749 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 91200 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.684 rmmod nvme_tcp 00:23:32.684 rmmod nvme_fabrics 00:23:32.684 rmmod nvme_keyring 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 91157 ']' 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 91157 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 91157 ']' 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 91157 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.684 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91157 00:23:32.685 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.685 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.685 killing process with pid 91157 00:23:32.685 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91157' 00:23:32.685 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 91157 00:23:32.685 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 91157 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:33.620 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:33.878 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tzOfF2Jkqy /tmp/tmp.Cb9PmrPLax /tmp/tmp.Dd3IU6Mn4R 00:23:33.878 00:23:33.878 real 1m45.319s 00:23:33.878 user 2m47.502s 00:23:33.878 sys 0m31.181s 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.878 ************************************ 00:23:33.878 END TEST nvmf_tls 00:23:33.878 ************************************ 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.878 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.878 ************************************ 00:23:33.878 START TEST nvmf_fips 00:23:33.878 ************************************ 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:34.138 * Looking for test storage... 
00:23:34.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:34.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.138 --rc genhtml_branch_coverage=1 00:23:34.138 --rc genhtml_function_coverage=1 00:23:34.138 --rc genhtml_legend=1 00:23:34.138 --rc geninfo_all_blocks=1 00:23:34.138 --rc geninfo_unexecuted_blocks=1 00:23:34.138 00:23:34.138 ' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:34.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.138 --rc genhtml_branch_coverage=1 00:23:34.138 --rc genhtml_function_coverage=1 00:23:34.138 --rc genhtml_legend=1 00:23:34.138 --rc geninfo_all_blocks=1 00:23:34.138 --rc geninfo_unexecuted_blocks=1 00:23:34.138 00:23:34.138 ' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:34.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.138 --rc genhtml_branch_coverage=1 00:23:34.138 --rc genhtml_function_coverage=1 00:23:34.138 --rc genhtml_legend=1 00:23:34.138 --rc geninfo_all_blocks=1 00:23:34.138 --rc geninfo_unexecuted_blocks=1 00:23:34.138 00:23:34.138 ' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:34.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.138 --rc genhtml_branch_coverage=1 00:23:34.138 --rc genhtml_function_coverage=1 00:23:34.138 --rc genhtml_legend=1 00:23:34.138 --rc geninfo_all_blocks=1 00:23:34.138 --rc geninfo_unexecuted_blocks=1 00:23:34.138 00:23:34.138 ' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
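[editor's note] The lcov gate traced above leans on the dotted-version comparator from scripts/common.sh: lt is sugar over cmp_versions, which splits each version on the IFS=.-: shown in the trace and compares field by field. A condensed sketch of the call that just succeeded:

    lt() { cmp_versions "$1" '<' "$2"; }   # as traced: lt 1.15 2 -> cmp_versions 1.15 '<' 2
    lt 1.15 2 && echo "lcov 1.15 predates 2.x; keep the branch/function coverage flags"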
00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.138 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.139 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.139 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:34.399 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:34.399 Error setting digest 00:23:34.400 40D221EBAA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:34.400 40D221EBAA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.400 
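[editor's note] The check_openssl_version / provider block above, plus the deliberately failing `openssl md5`, reduce to a short FIPS sanity probe. A hedged recap (not the literal fips.sh code), assuming OPENSSL_CONF already points at a fips-enabled config as the test arranges:

    ver=$(openssl version | awk '{print $2}')        # 3.1.1 in this run; must be >= 3.0.0
    mods=$(openssl info -modulesdir)                 # /usr/lib64/ossl-modules here
    [[ -f "$mods/fips.so" ]] || echo 'no FIPS provider module' >&2
    openssl list -providers | grep name              # expect both a base and a fips provider
    if openssl md5 /dev/null 2>/dev/null; then       # MD5 must be rejected once FIPS is active
        echo 'FIPS mode is not actually enforced' >&2
    fi

The "Error setting digest" output below is therefore the expected result: the test asserts that MD5 fails, proving the FIPS provider is enforcing the algorithm policy.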
09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:34.400 Cannot find device "nvmf_init_br" 00:23:34.400 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:34.400 Cannot find device "nvmf_init_br2" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:34.400 Cannot find device "nvmf_tgt_br" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:34.400 Cannot find device "nvmf_tgt_br2" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:34.400 Cannot find device "nvmf_init_br" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:34.400 Cannot find device "nvmf_init_br2" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:34.400 Cannot find device "nvmf_tgt_br" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:34.400 Cannot find device "nvmf_tgt_br2" 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:23:34.400 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:34.400 Cannot find device "nvmf_br" 00:23:34.400 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:23:34.400 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:34.400 Cannot find device "nvmf_init_if" 00:23:34.400 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:23:34.400 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:34.659 Cannot find device "nvmf_init_if2" 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.659 09:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.659 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.660 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:34.660 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:34.660 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.660 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:34.918 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.918 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.918 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.918 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:34.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:23:34.919 00:23:34.919 --- 10.0.0.3 ping statistics --- 00:23:34.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.919 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:34.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:34.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.117 ms 00:23:34.919 00:23:34.919 --- 10.0.0.4 ping statistics --- 00:23:34.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.919 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:34.919 00:23:34.919 --- 10.0.0.1 ping statistics --- 00:23:34.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.919 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:34.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:34.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:23:34.919 00:23:34.919 --- 10.0.0.2 ping statistics --- 00:23:34.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.919 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=91549 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 91549 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 91549 ']' 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.919 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.178 [2024-12-05 09:55:22.556050] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
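[editor's note] The nvmf_veth_init sequence above builds a four-veth topology; condensed here to one pair per side (the real script also adds nvmf_init_if2/nvmf_tgt_if2, the second subnet addresses, and the iptables ACCEPT rules), the shape is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target veth lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the two *_br peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # default namespace now reaches the target IP, as the pings above confirm

The four pings in the trace check both directions across the bridge before the NVMe-oF target is started inside the namespace.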
00:23:35.178 [2024-12-05 09:55:22.556550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.178 [2024-12-05 09:55:22.747537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.436 [2024-12-05 09:55:22.902397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.436 [2024-12-05 09:55:22.902480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.436 [2024-12-05 09:55:22.902507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.436 [2024-12-05 09:55:22.902525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.436 [2024-12-05 09:55:22.902543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.436 [2024-12-05 09:55:22.904334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Wn8 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Wn8 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Wn8 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Wn8 00:23:36.004 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.263 [2024-12-05 09:55:23.887246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.522 [2024-12-05 09:55:23.903171] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.522 [2024-12-05 09:55:23.903428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:36.522 malloc0 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.522 09:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=91603 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 91603 /var/tmp/bdevperf.sock 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 91603 ']' 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.522 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:36.780 [2024-12-05 09:55:24.161225] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:36.780 [2024-12-05 09:55:24.161627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91603 ] 00:23:36.780 [2024-12-05 09:55:24.340263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.038 [2024-12-05 09:55:24.444231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.607 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.607 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:37.607 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Wn8 00:23:37.607 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.864 [2024-12-05 09:55:25.371560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.864 TLSTESTn1 00:23:37.864 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.122 Running I/O for 10 seconds... 
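[editor's note] The fips.sh steps traced above that wire TLS into bdevperf condense to the following; the PSK value and /tmp path are the ones this run generated, and the rpc.py invocations are taken verbatim from the trace (run from the spdk repo root):

    key_path=/tmp/spdk-psk.Wn8
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"                           # keyring requires owner-only permissions
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drives the 10 s verify run

With the PSK registered as key0, the attach succeeds over TLS (TLSTESTn1 below) and the verify workload runs against it for ten seconds.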
00:23:40.025 3444.00 IOPS, 13.45 MiB/s [2024-12-05T09:55:28.589Z] 3487.50 IOPS, 13.62 MiB/s [2024-12-05T09:55:29.962Z] 3523.33 IOPS, 13.76 MiB/s [2024-12-05T09:55:30.894Z] 3540.50 IOPS, 13.83 MiB/s [2024-12-05T09:55:31.825Z] 3546.00 IOPS, 13.85 MiB/s [2024-12-05T09:55:32.758Z] 3558.33 IOPS, 13.90 MiB/s [2024-12-05T09:55:33.693Z] 3560.71 IOPS, 13.91 MiB/s [2024-12-05T09:55:34.628Z] 3565.25 IOPS, 13.93 MiB/s [2024-12-05T09:55:36.016Z] 3567.22 IOPS, 13.93 MiB/s [2024-12-05T09:55:36.016Z] 3569.40 IOPS, 13.94 MiB/s 00:23:48.387 Latency(us) 00:23:48.387 [2024-12-05T09:55:36.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.387 Verification LBA range: start 0x0 length 0x2000 00:23:48.387 TLSTESTn1 : 10.03 3572.43 13.95 0.00 0.00 35755.15 9889.98 28120.90 00:23:48.387 [2024-12-05T09:55:36.016Z] =================================================================================================================== 00:23:48.387 [2024-12-05T09:55:36.016Z] Total : 3572.43 13.95 0.00 0.00 35755.15 9889.98 28120.90 00:23:48.387 { 00:23:48.387 "results": [ 00:23:48.387 { 00:23:48.387 "job": "TLSTESTn1", 00:23:48.387 "core_mask": "0x4", 00:23:48.387 "workload": "verify", 00:23:48.387 "status": "finished", 00:23:48.387 "verify_range": { 00:23:48.387 "start": 0, 00:23:48.387 "length": 8192 00:23:48.387 }, 00:23:48.387 "queue_depth": 128, 00:23:48.387 "io_size": 4096, 00:23:48.387 "runtime": 10.027338, 00:23:48.387 "iops": 3572.4336808034195, 00:23:48.387 "mibps": 13.954819065638358, 00:23:48.387 "io_failed": 0, 00:23:48.387 "io_timeout": 0, 00:23:48.387 "avg_latency_us": 35755.14844975916, 00:23:48.387 "min_latency_us": 9889.978181818182, 00:23:48.387 "max_latency_us": 28120.901818181817 00:23:48.387 } 00:23:48.387 ], 00:23:48.387 "core_count": 1 00:23:48.387 } 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:48.387 nvmf_trace.0 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 91603 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 91603 ']' 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
91603 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91603 00:23:48.387 killing process with pid 91603 00:23:48.387 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.387 00:23:48.387 Latency(us) 00:23:48.387 [2024-12-05T09:55:36.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.387 [2024-12-05T09:55:36.016Z] =================================================================================================================== 00:23:48.387 [2024-12-05T09:55:36.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:48.387 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:48.388 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91603' 00:23:48.388 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 91603 00:23:48.388 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 91603 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.325 rmmod nvme_tcp 00:23:49.325 rmmod nvme_fabrics 00:23:49.325 rmmod nvme_keyring 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 91549 ']' 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 91549 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 91549 ']' 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 91549 00:23:49.325 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91549 00:23:49.326 killing process with pid 91549 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91549' 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 91549 00:23:49.326 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 91549 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:50.262 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:50.522 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:23:50.522 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Wn8 00:23:50.522 ************************************ 00:23:50.522 END TEST nvmf_fips 00:23:50.522 ************************************ 00:23:50.522 00:23:50.522 real 0m16.562s 00:23:50.522 user 0m22.677s 00:23:50.522 sys 0m6.193s 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:50.522 ************************************ 00:23:50.522 START TEST nvmf_control_msg_list 00:23:50.522 ************************************ 00:23:50.522 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:50.781 * Looking for test storage... 00:23:50.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:50.781 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:50.781 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:50.781 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:50.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.782 --rc genhtml_branch_coverage=1 00:23:50.782 --rc genhtml_function_coverage=1 00:23:50.782 --rc genhtml_legend=1 00:23:50.782 --rc geninfo_all_blocks=1 00:23:50.782 --rc geninfo_unexecuted_blocks=1 00:23:50.782 00:23:50.782 ' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:50.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.782 --rc genhtml_branch_coverage=1 00:23:50.782 --rc genhtml_function_coverage=1 00:23:50.782 --rc genhtml_legend=1 00:23:50.782 --rc geninfo_all_blocks=1 00:23:50.782 --rc geninfo_unexecuted_blocks=1 00:23:50.782 00:23:50.782 ' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:50.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.782 --rc genhtml_branch_coverage=1 00:23:50.782 --rc genhtml_function_coverage=1 00:23:50.782 --rc genhtml_legend=1 00:23:50.782 --rc geninfo_all_blocks=1 00:23:50.782 --rc geninfo_unexecuted_blocks=1 00:23:50.782 00:23:50.782 ' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:50.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.782 --rc genhtml_branch_coverage=1 00:23:50.782 --rc genhtml_function_coverage=1 00:23:50.782 --rc genhtml_legend=1 00:23:50.782 --rc geninfo_all_blocks=1 00:23:50.782 --rc 
geninfo_unexecuted_blocks=1 00:23:50.782 00:23:50.782 ' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.782 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:50.782 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:50.783 Cannot find device "nvmf_init_br" 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:50.783 Cannot find device "nvmf_init_br2" 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:50.783 Cannot find device "nvmf_tgt_br" 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:50.783 Cannot find device "nvmf_tgt_br2" 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:23:50.783 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:51.043 Cannot find device "nvmf_init_br" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:51.043 Cannot find device "nvmf_init_br2" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:51.043 Cannot find device "nvmf_tgt_br" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:51.043 Cannot find device "nvmf_tgt_br2" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:51.043 Cannot find device "nvmf_br" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:51.043 Cannot find 
device "nvmf_init_if" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:51.043 Cannot find device "nvmf_init_if2" 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:51.043 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:51.043 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:51.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:51.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:23:51.303 00:23:51.303 --- 10.0.0.3 ping statistics --- 00:23:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.303 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:51.303 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:51.303 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:23:51.303 00:23:51.303 --- 10.0.0.4 ping statistics --- 00:23:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.303 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:51.303 00:23:51.303 --- 10.0.0.1 ping statistics --- 00:23:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.303 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:51.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:23:51.303 00:23:51.303 --- 10.0.0.2 ping statistics --- 00:23:51.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.303 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=92043 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 92043 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 92043 ']' 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.303 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.304 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
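The waitforlisten helper being traced here blocks until the freshly launched nvmf_tgt (pid 92043) is alive and answering on its RPC socket. A minimal sketch of that polling pattern — not SPDK's exact implementation, just the shape implied by the defaults visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100):

    # Sketch only: poll until the app creates its RPC socket or dies.
    # $1 = app pid, $2 = RPC socket path (defaults match the trace above).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target exited before listening
            [[ -S $rpc_addr ]] && return 0          # socket exists: app is up
            sleep 0.5
        done
        return 1  # timed out waiting for the listener
    }

The real helper also verifies the socket responds to RPCs before returning; the trace that follows shows its retry counter ((( i == 0 ))) and the successful return 0.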
00:23:51.304 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.304 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:51.304 [2024-12-05 09:55:38.901356] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:23:51.304 [2024-12-05 09:55:38.901546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.563 [2024-12-05 09:55:39.097248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.822 [2024-12-05 09:55:39.260220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.822 [2024-12-05 09:55:39.260308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.822 [2024-12-05 09:55:39.260335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.822 [2024-12-05 09:55:39.260372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.822 [2024-12-05 09:55:39.260392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.822 [2024-12-05 09:55:39.262033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.391 [2024-12-05 09:55:39.816898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.391 Malloc0 00:23:52.391 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:52.392 [2024-12-05 09:55:39.880218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=92093 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=92094 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=92095 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:52.392 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 92093 00:23:52.652 [2024-12-05 09:55:40.099170] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:23:52.652 [2024-12-05 09:55:40.128785] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:52.652 [2024-12-05 09:55:40.139494] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:53.589 Initializing NVMe Controllers 00:23:53.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:53.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:53.589 Initialization complete. Launching workers. 00:23:53.589 ======================================================== 00:23:53.589 Latency(us) 00:23:53.589 Device Information : IOPS MiB/s Average min max 00:23:53.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2893.00 11.30 345.25 173.61 1499.21 00:23:53.589 ======================================================== 00:23:53.589 Total : 2893.00 11.30 345.25 173.61 1499.21 00:23:53.589 00:23:53.589 Initializing NVMe Controllers 00:23:53.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:53.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:53.589 Initialization complete. Launching workers. 00:23:53.589 ======================================================== 00:23:53.589 Latency(us) 00:23:53.589 Device Information : IOPS MiB/s Average min max 00:23:53.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2868.00 11.20 348.24 201.71 913.60 00:23:53.589 ======================================================== 00:23:53.589 Total : 2868.00 11.20 348.24 201.71 913.60 00:23:53.589 00:23:53.589 Initializing NVMe Controllers 00:23:53.589 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:53.589 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:53.589 Initialization complete. Launching workers. 
00:23:53.589 ======================================================== 00:23:53.589 Latency(us) 00:23:53.589 Device Information : IOPS MiB/s Average min max 00:23:53.589 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2906.00 11.35 343.69 155.80 669.59 00:23:53.589 ======================================================== 00:23:53.589 Total : 2906.00 11.35 343.69 155.80 669.59 00:23:53.589 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 92094 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 92095 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.589 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:53.847 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.847 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:53.847 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.848 rmmod nvme_tcp 00:23:53.848 rmmod nvme_fabrics 00:23:53.848 rmmod nvme_keyring 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 92043 ']' 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 92043 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 92043 ']' 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 92043 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92043 00:23:53.848 killing process with pid 92043 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92043' 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 92043 00:23:53.848 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 92043 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:54.782 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:23:55.040 00:23:55.040 real 0m4.511s 00:23:55.040 user 0m6.294s 00:23:55.040 
sys 0m1.798s 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.040 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.040 ************************************ 00:23:55.040 END TEST nvmf_control_msg_list 00:23:55.040 ************************************ 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:55.300 ************************************ 00:23:55.300 START TEST nvmf_wait_for_buf 00:23:55.300 ************************************ 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:55.300 * Looking for test storage... 00:23:55.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.300 --rc genhtml_branch_coverage=1 00:23:55.300 --rc genhtml_function_coverage=1 00:23:55.300 --rc genhtml_legend=1 00:23:55.300 --rc geninfo_all_blocks=1 00:23:55.300 --rc geninfo_unexecuted_blocks=1 00:23:55.300 00:23:55.300 ' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:55.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.300 --rc genhtml_branch_coverage=1 00:23:55.300 --rc genhtml_function_coverage=1 00:23:55.300 --rc genhtml_legend=1 00:23:55.300 --rc geninfo_all_blocks=1 00:23:55.300 --rc geninfo_unexecuted_blocks=1 00:23:55.300 00:23:55.300 ' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.300 --rc genhtml_branch_coverage=1 00:23:55.300 --rc genhtml_function_coverage=1 00:23:55.300 --rc genhtml_legend=1 00:23:55.300 --rc geninfo_all_blocks=1 00:23:55.300 --rc geninfo_unexecuted_blocks=1 00:23:55.300 00:23:55.300 ' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.300 --rc genhtml_branch_coverage=1 00:23:55.300 --rc genhtml_function_coverage=1 00:23:55.300 --rc genhtml_legend=1 00:23:55.300 --rc geninfo_all_blocks=1 00:23:55.300 --rc geninfo_unexecuted_blocks=1 00:23:55.300 00:23:55.300 ' 00:23:55.300 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:55.300 09:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
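The "[: : integer expression expected" message at nvmf/common.sh line 33 (seen in both test preambles) is bash complaining that '[' '' -eq 1 ']' compares an empty expansion numerically: whatever flag variable build_nvmf_app_args tests there is unset in this run. A defensive sketch of the usual fix — the variable and option names here are hypothetical, chosen only to illustrate the pattern, not copied from common.sh:

    # Sketch only: default the flag to 0 so the numeric test never sees an
    # empty string, instead of '[' '' -eq 1 ']' as in the trace above.
    SOME_FLAG=${SOME_FLAG:-0}       # hypothetical variable name
    if [ "$SOME_FLAG" -eq 1 ]; then
        NVMF_APP+=(--extra-arg)     # hypothetical option, for illustration
    fi

Because the failing test merely returns nonzero under set +e here, the run continues; the trace below picks up at nvmftestinit's trap and network setup.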
00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.301 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.560 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:55.561 Cannot find device "nvmf_init_br" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:55.561 Cannot find device "nvmf_init_br2" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:55.561 Cannot find device "nvmf_tgt_br" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.561 Cannot find device "nvmf_tgt_br2" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:55.561 Cannot find device "nvmf_init_br" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:55.561 Cannot find device "nvmf_init_br2" 00:23:55.561 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:55.561 Cannot find device "nvmf_tgt_br" 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:55.561 Cannot find device "nvmf_tgt_br2" 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:55.561 Cannot find device "nvmf_br" 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:55.561 Cannot find device "nvmf_init_if" 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:55.561 Cannot find device "nvmf_init_if2" 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.561 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.561 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:55.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:55.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:55.821 00:23:55.821 --- 10.0.0.3 ping statistics --- 00:23:55.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.821 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:55.821 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:55.821 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:23:55.821 00:23:55.821 --- 10.0.0.4 ping statistics --- 00:23:55.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.821 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:55.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:55.821 00:23:55.821 --- 10.0.0.1 ping statistics --- 00:23:55.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.821 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:55.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:55.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:23:55.821 00:23:55.821 --- 10.0.0.2 ping statistics --- 00:23:55.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.821 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:23:55.821 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=92339 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 92339 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 92339 ']' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.822 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.081 [2024-12-05 09:55:43.505477] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
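While the target application comes up, it is worth summarizing the virtual topology that the nvmf_veth_init trace above just built: a network namespace (nvmf_tgt_ns_spdk) holding the target ends of four veth pairs, a bridge (nvmf_br) joining the host-side peers, initiator addresses 10.0.0.1-2 and in-namespace target addresses 10.0.0.3-4, verified end to end by the four pings. A minimal standalone sketch, reduced to a single initiator/target pair (device names and addresses follow the log; run as root, error handling omitted):

# sketch: one bridged initiator/target veth pair, as in nvmf_veth_init
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                          # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3                                       # initiator -> target, as checked above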
00:23:56.081 [2024-12-05 09:55:43.505660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.081 [2024-12-05 09:55:43.702412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.340 [2024-12-05 09:55:43.863842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.340 [2024-12-05 09:55:43.863948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.340 [2024-12-05 09:55:43.863978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.340 [2024-12-05 09:55:43.864019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.340 [2024-12-05 09:55:43.864041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.340 [2024-12-05 09:55:43.865646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:56.908 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.908 09:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 Malloc0 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 [2024-12-05 09:55:44.744034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:57.167 [2024-12-05 09:55:44.768267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.167 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:57.426 [2024-12-05 09:55:45.027138] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:23:58.804 Initializing NVMe Controllers 00:23:58.804 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.804 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:58.804 Initialization complete. Launching workers. 00:23:58.804 ======================================================== 00:23:58.804 Latency(us) 00:23:58.804 Device Information : IOPS MiB/s Average min max 00:23:58.804 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.00 16.00 32524.28 7950.32 64525.53 00:23:58.804 ======================================================== 00:23:58.804 Total : 128.00 16.00 32524.28 7950.32 64525.53 00:23:58.804 00:23:58.804 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:58.804 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.804 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:58.804 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:58.804 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.063 rmmod nvme_tcp 00:23:59.063 rmmod nvme_fabrics 00:23:59.063 rmmod nvme_keyring 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 92339 ']' 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 92339 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 92339 ']' 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 92339 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
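The pass condition of this test is visible in the trace: wait_for_buf.sh shrinks the iobuf small pool to 154 buffers (iobuf_set_options) and creates the TCP transport with only 24 buffers (-n 24 -b 24), so the 4-deep 128 KiB randread perf workload must exhaust the pool; the test then reads the nvmf_TCP small-pool retry counter (2022 here) and treats zero retries as a failure, since zero would mean no request ever had to wait for a buffer. A sketch of the check, where the failure path is inferred rather than shown in the trace:

# sketch: the wait_for_buf pass condition, per the iobuf_get_stats call above
retry_count=$(rpc_cmd iobuf_get_stats \
  | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && exit 1   # no retries => pool was never exhausted => test proved nothing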
00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92339 00:23:59.063 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.063 killing process with pid 92339 00:23:59.064 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.064 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92339' 00:23:59.064 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 92339 00:23:59.064 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 92339 00:24:00.000 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:00.000 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.000 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:00.001 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.260 09:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:24:00.260 00:24:00.260 real 0m5.047s 00:24:00.260 user 0m4.367s 00:24:00.260 sys 0m1.042s 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:00.260 ************************************ 00:24:00.260 END TEST nvmf_wait_for_buf 00:24:00.260 ************************************ 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:00.260 ************************************ 00:24:00.260 START TEST nvmf_fuzz 00:24:00.260 ************************************ 00:24:00.260 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:00.260 * Looking for test storage... 
00:24:00.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.520 --rc genhtml_branch_coverage=1 00:24:00.520 --rc genhtml_function_coverage=1 00:24:00.520 --rc genhtml_legend=1 00:24:00.520 --rc geninfo_all_blocks=1 00:24:00.520 --rc geninfo_unexecuted_blocks=1 00:24:00.520 00:24:00.520 ' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.520 --rc genhtml_branch_coverage=1 00:24:00.520 --rc genhtml_function_coverage=1 00:24:00.520 --rc genhtml_legend=1 00:24:00.520 --rc geninfo_all_blocks=1 00:24:00.520 --rc geninfo_unexecuted_blocks=1 00:24:00.520 00:24:00.520 ' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.520 --rc genhtml_branch_coverage=1 00:24:00.520 --rc genhtml_function_coverage=1 00:24:00.520 --rc genhtml_legend=1 00:24:00.520 --rc geninfo_all_blocks=1 00:24:00.520 --rc geninfo_unexecuted_blocks=1 00:24:00.520 00:24:00.520 ' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.520 --rc genhtml_branch_coverage=1 00:24:00.520 --rc genhtml_function_coverage=1 00:24:00.520 --rc genhtml_legend=1 00:24:00.520 --rc geninfo_all_blocks=1 00:24:00.520 --rc geninfo_unexecuted_blocks=1 00:24:00.520 00:24:00.520 ' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
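The long scripts/common.sh trace above is the lcov version probe: `lt 1.15 2` splits each version string on '.', '-' and ':' and compares the fields numerically, and the result decides which `--rc lcov_*` coverage options get exported. A condensed sketch of the comparison (simplified: the real cmp_versions also validates each field with decimal() and dispatches on '>', '<=', '>=' and '==' operators):

# sketch: field-wise "less than" version compare, as traced from scripts/common.sh
lt() {
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields compare as 0
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov is pre-2.x: keep the legacy --rc lcov_* options"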
00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.520 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.521 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.521 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
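One idiom in this trace deserves a note before the second veth setup scrolls past: every firewall rule is installed through the ipts wrapper, which tags the rule with `-m comment --comment 'SPDK_NVMF:<original args>'` (nvmf/common.sh@790 above), and teardown (iptr, nvmf/common.sh@791 in the earlier cleanup) simply restores the saved ruleset minus anything carrying that tag, so the tests never have to remember which rules they added. A minimal sketch of the pair, assuming the same tag string:

# sketch: tag-on-insert / filter-on-restore firewall cleanup, as in ipts()/iptr()
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }        # tag each added rule
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }     # drop only tagged rules

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP, as above
iptr                                                                 # removes exactly that rule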
00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:00.521 Cannot find device "nvmf_init_br" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:24:00.521 09:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:00.521 Cannot find device "nvmf_init_br2" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:00.521 Cannot find device "nvmf_tgt_br" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.521 Cannot find device "nvmf_tgt_br2" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:00.521 Cannot find device "nvmf_init_br" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:00.521 Cannot find device "nvmf_init_br2" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:00.521 Cannot find device "nvmf_tgt_br" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:00.521 Cannot find device "nvmf_tgt_br2" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:00.521 Cannot find device "nvmf_br" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:00.521 Cannot find device "nvmf_init_if" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:00.521 Cannot find device "nvmf_init_if2" 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:24:00.521 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.781 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:00.782 09:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:00.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:00.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:00.782 00:24:00.782 --- 10.0.0.3 ping statistics --- 00:24:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.782 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:00.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:00.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:24:00.782 00:24:00.782 --- 10.0.0.4 ping statistics --- 00:24:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.782 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:00.782 00:24:00.782 --- 10.0.0.1 ping statistics --- 00:24:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.782 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:00.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:00.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:24:00.782 00:24:00.782 --- 10.0.0.2 ping statistics --- 00:24:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.782 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=92670 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 92670 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 92670 ']' 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
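waitforlisten blocks here until the nvmf_tgt just launched inside the namespace answers on /var/tmp/spdk.sock, bounded by the max_retries=100 set above. The helper's body is not shown in this trace, so the following is only a plausible reconstruction of the polling loop; rpc.py is SPDK's stock RPC client and rpc_get_methods a cheap, always-available method:

# sketch (reconstruction, not the traced helper): poll the RPC socket until it answers
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  local max_retries=100
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1                 # app died before listening
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
      >/dev/null 2>&1 && return 0                          # socket up and answering
    sleep 0.5
  done
  return 1                                                 # timed out
}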
00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.782 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.717 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.717 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:01.717 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.717 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.717 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 Malloc0 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:24:01.975 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:24:02.542 Shutting down the fuzz application 00:24:02.543 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:03.110 Shutting down the fuzz application 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.110 rmmod nvme_tcp 00:24:03.110 rmmod nvme_fabrics 00:24:03.110 rmmod nvme_keyring 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 92670 ']' 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 92670 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 92670 ']' 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 92670 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92670 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.110 killing process with pid 92670 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92670' 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 92670 00:24:03.110 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 92670 00:24:04.487 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.487 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.487 09:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.487 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:04.487 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.488 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:24:04.488 00:24:04.488 real 0m4.289s 00:24:04.488 user 0m4.313s 00:24:04.488 sys 0m1.052s 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.488 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.488 ************************************ 00:24:04.488 END TEST nvmf_fuzz 00:24:04.488 ************************************ 00:24:04.748 09:55:52 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.748 ************************************ 00:24:04.748 START TEST nvmf_multiconnection 00:24:04.748 ************************************ 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:04.748 * Looking for test storage... 00:24:04.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.748 --rc genhtml_branch_coverage=1 00:24:04.748 --rc genhtml_function_coverage=1 00:24:04.748 --rc genhtml_legend=1 00:24:04.748 --rc geninfo_all_blocks=1 00:24:04.748 --rc geninfo_unexecuted_blocks=1 00:24:04.748 00:24:04.748 ' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.748 --rc genhtml_branch_coverage=1 00:24:04.748 --rc genhtml_function_coverage=1 00:24:04.748 --rc genhtml_legend=1 00:24:04.748 --rc geninfo_all_blocks=1 00:24:04.748 --rc geninfo_unexecuted_blocks=1 00:24:04.748 00:24:04.748 ' 00:24:04.748 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.748 --rc genhtml_branch_coverage=1 00:24:04.748 --rc genhtml_function_coverage=1 00:24:04.748 --rc genhtml_legend=1 00:24:04.748 --rc geninfo_all_blocks=1 00:24:04.748 --rc geninfo_unexecuted_blocks=1 00:24:04.749 00:24:04.749 ' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.749 --rc genhtml_branch_coverage=1 00:24:04.749 --rc genhtml_function_coverage=1 00:24:04.749 --rc genhtml_legend=1 00:24:04.749 --rc geninfo_all_blocks=1 00:24:04.749 --rc geninfo_unexecuted_blocks=1 00:24:04.749 00:24:04.749 ' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.749 
09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.749 09:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.749 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.008 Cannot find device "nvmf_init_br" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.008 Cannot find device "nvmf_init_br2" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.008 Cannot find device "nvmf_tgt_br" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.008 Cannot find device "nvmf_tgt_br2" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.008 Cannot find device "nvmf_init_br" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.008 Cannot find device "nvmf_init_br2" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.008 Cannot find device "nvmf_tgt_br" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.008 Cannot find device "nvmf_tgt_br2" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.008 Cannot find device "nvmf_br" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:05.008 Cannot find device "nvmf_init_if" 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:24:05.008 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:24:05.008 Cannot find device "nvmf_init_if2" 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.009 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:24:05.268 00:24:05.268 --- 10.0.0.3 ping statistics --- 00:24:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.268 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.268 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.268 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:05.268 00:24:05.268 --- 10.0.0.4 ping statistics --- 00:24:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.268 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:24:05.268
00:24:05.268 --- 10.0.0.1 ping statistics ---
00:24:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:05.268 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:05.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:05.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms
00:24:05.268
00:24:05.268 --- 10.0.0.2 ping statistics ---
00:24:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:05.268 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=92950
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 92950
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 92950 ']'
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:05.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
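The four clean pings above conclude nvmf_veth_init: the host-side initiator addresses (10.0.0.1, 10.0.0.2) and the namespaced target addresses (10.0.0.3, 10.0.0.4) can all reach each other across the bridge, so the multiconnection target is relaunched with four reactors (-m 0xF) instead of the single core the fuzz pass used. For reference, a condensed sketch of the topology the earlier ip/iptables commands assembled, showing only the first interface of each pair (the *_if2/*_br2 pair is built the same way, and every link is also brought up with `ip link set ... up`):

    # The target's interfaces live in a dedicated network namespace.
    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per side; the *_br peers stay in the host namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # A bridge joins the host-side peers, giving initiator <-> target connectivity.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Admit NVMe/TCP traffic (port 4420) ahead of any restrictive INPUT rules.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target sanity check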
00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.268 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:05.530 [2024-12-05 09:55:52.973481] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:05.530 [2024-12-05 09:55:52.973674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.814 [2024-12-05 09:55:53.158796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.814 [2024-12-05 09:55:53.280073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.814 [2024-12-05 09:55:53.280137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.814 [2024-12-05 09:55:53.280155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.814 [2024-12-05 09:55:53.280166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.814 [2024-12-05 09:55:53.280180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.814 [2024-12-05 09:55:53.282265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.814 [2024-12-05 09:55:53.282380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.814 [2024-12-05 09:55:53.283786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.814 [2024-12-05 09:55:53.283818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.400 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.400 [2024-12-05 09:55:54.001451] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.400 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.400 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:06.400 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.400 09:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:06.400 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.400 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 Malloc1 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 [2024-12-05 09:55:54.125816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 Malloc2 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.660 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 Malloc3 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc4 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 Malloc4 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 Malloc5 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.920 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 Malloc6 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.180 Malloc7 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.180 Malloc8 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.180 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 
09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 Malloc9 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 Malloc10 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.440 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.698 Malloc11 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:24:07.698 
09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:07.698 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:10.227 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:12.131 09:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:12.131 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:14.664 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 
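The trace above and below records target/multiconnection.sh iterating i=1..$NVMF_SUBSYS (11 subsystems): script lines 21-25 create a malloc bdev, a subsystem with serial SPDK$i, a namespace, and a TCP listener for each index, and lines 28-30 then connect the host to each subsystem and poll until its serial shows up in lsblk. A minimal bash sketch of those two loops, reconstructed from the xtrace output (rpc_cmd is the test framework's wrapper around scripts/rpc.py; the framework's waitforserial helper, which retries up to 15 times, is simplified here to a plain polling loop):

NVMF_SUBSYS=11
NQN_BASE=nqn.2016-06.io.spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516

# Target side: one 64 MiB malloc bdev (512 B blocks), one subsystem, one
# namespace, and one TCP listener on 10.0.0.3:4420 per index
# (multiconnection.sh lines 21-25 in the trace).
for i in $(seq 1 "$NVMF_SUBSYS"); do
  rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
  rpc_cmd nvmf_create_subsystem "$NQN_BASE:cnode$i" -a -s "SPDK$i"
  rpc_cmd nvmf_subsystem_add_ns "$NQN_BASE:cnode$i" "Malloc$i"
  rpc_cmd nvmf_subsystem_add_listener "$NQN_BASE:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done

# Host side: connect each subsystem over TCP, then wait until a block
# device whose serial is SPDK$i appears (simplified waitforserial).
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n "$NQN_BASE:cnode$i" -a 10.0.0.3 -s 4420
  until [[ $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") -ge 1 ]]; do
    sleep 2
  done
done

Once all eleven connects complete, the resulting /dev/nvmeXn1 devices are handed to the fio-wrapper runs that follow.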
00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.564 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:24:16.564 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:16.564 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:16.564 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.564 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:16.564 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:19.094 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:20.995 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:23.527 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:23.528 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:25.428 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.961 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:24:27.961 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:27.961 09:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:27.961 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.961 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:27.961 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:29.861 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:32.396 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:32.397 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:32.397 [global] 00:24:32.397 thread=1 00:24:32.397 invalidate=1 00:24:32.397 rw=read 00:24:32.397 time_based=1 00:24:32.397 runtime=10 00:24:32.397 ioengine=libaio 00:24:32.397 direct=1 00:24:32.397 bs=262144 00:24:32.397 iodepth=64 
00:24:32.397 norandommap=1 00:24:32.397 numjobs=1 00:24:32.397 00:24:32.397 [job0] 00:24:32.397 filename=/dev/nvme0n1 00:24:32.397 [job1] 00:24:32.397 filename=/dev/nvme10n1 00:24:32.397 [job2] 00:24:32.397 filename=/dev/nvme1n1 00:24:32.397 [job3] 00:24:32.397 filename=/dev/nvme2n1 00:24:32.397 [job4] 00:24:32.397 filename=/dev/nvme3n1 00:24:32.397 [job5] 00:24:32.397 filename=/dev/nvme4n1 00:24:32.397 [job6] 00:24:32.397 filename=/dev/nvme5n1 00:24:32.397 [job7] 00:24:32.397 filename=/dev/nvme6n1 00:24:32.397 [job8] 00:24:32.397 filename=/dev/nvme7n1 00:24:32.397 [job9] 00:24:32.397 filename=/dev/nvme8n1 00:24:32.397 [job10] 00:24:32.397 filename=/dev/nvme9n1 00:24:32.397 Could not set queue depth (nvme0n1) 00:24:32.397 Could not set queue depth (nvme10n1) 00:24:32.397 Could not set queue depth (nvme1n1) 00:24:32.397 Could not set queue depth (nvme2n1) 00:24:32.397 Could not set queue depth (nvme3n1) 00:24:32.397 Could not set queue depth (nvme4n1) 00:24:32.397 Could not set queue depth (nvme5n1) 00:24:32.397 Could not set queue depth (nvme6n1) 00:24:32.397 Could not set queue depth (nvme7n1) 00:24:32.397 Could not set queue depth (nvme8n1) 00:24:32.397 Could not set queue depth (nvme9n1) 00:24:32.397 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.397 fio-3.35 00:24:32.397 Starting 11 threads 00:24:44.607 00:24:44.607 job0: (groupid=0, jobs=1): err= 0: pid=93429: Thu Dec 5 09:56:30 2024 00:24:44.607 read: IOPS=334, BW=83.6MiB/s (87.7MB/s)(848MiB/10141msec) 00:24:44.607 slat (usec): min=20, max=128095, avg=2948.28, stdev=11809.15 00:24:44.607 clat (msec): min=25, max=350, avg=188.00, stdev=56.87 00:24:44.607 lat (msec): min=26, max=350, avg=190.95, stdev=58.50 00:24:44.607 clat percentiles (msec): 00:24:44.607 | 1.00th=[ 88], 5.00th=[ 97], 10.00th=[ 104], 20.00th=[ 118], 00:24:44.607 | 30.00th=[ 142], 40.00th=[ 199], 50.00th=[ 209], 60.00th=[ 218], 00:24:44.607 | 70.00th=[ 226], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 259], 00:24:44.607 | 99.00th=[ 300], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:24:44.607 | 99.99th=[ 351] 00:24:44.607 bw ( KiB/s): min=64128, max=146432, per=13.16%, avg=85188.30, stdev=27561.59, samples=20 00:24:44.607 iops : min= 250, max= 572, avg=332.65, stdev=107.74, samples=20 
00:24:44.607 lat (msec) : 50=0.80%, 100=8.05%, 250=84.02%, 500=7.13% 00:24:44.607 cpu : usr=0.20%, sys=1.28%, ctx=818, majf=0, minf=4097 00:24:44.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:24:44.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.607 issued rwts: total=3392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.607 job1: (groupid=0, jobs=1): err= 0: pid=93430: Thu Dec 5 09:56:30 2024 00:24:44.607 read: IOPS=142, BW=35.7MiB/s (37.4MB/s)(365MiB/10231msec) 00:24:44.607 slat (usec): min=17, max=427083, avg=6904.92, stdev=32455.84 00:24:44.607 clat (msec): min=11, max=874, avg=441.22, stdev=101.81 00:24:44.607 lat (msec): min=12, max=998, avg=448.12, stdev=107.43 00:24:44.607 clat percentiles (msec): 00:24:44.607 | 1.00th=[ 79], 5.00th=[ 338], 10.00th=[ 380], 20.00th=[ 409], 00:24:44.607 | 30.00th=[ 422], 40.00th=[ 435], 50.00th=[ 443], 60.00th=[ 460], 00:24:44.607 | 70.00th=[ 472], 80.00th=[ 498], 90.00th=[ 550], 95.00th=[ 584], 00:24:44.607 | 99.00th=[ 667], 99.50th=[ 701], 99.90th=[ 860], 99.95th=[ 877], 00:24:44.607 | 99.99th=[ 877] 00:24:44.607 bw ( KiB/s): min=16896, max=56945, per=5.51%, avg=35703.25, stdev=8051.88, samples=20 00:24:44.607 iops : min= 66, max= 222, avg=139.30, stdev=31.39, samples=20 00:24:44.607 lat (msec) : 20=0.62%, 100=3.77%, 250=0.07%, 500=78.27%, 750=17.07% 00:24:44.607 lat (msec) : 1000=0.21% 00:24:44.607 cpu : usr=0.03%, sys=0.68%, ctx=163, majf=0, minf=4098 00:24:44.607 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:24:44.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.607 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.607 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.607 job2: (groupid=0, jobs=1): err= 0: pid=93431: Thu Dec 5 09:56:30 2024 00:24:44.607 read: IOPS=328, BW=82.0MiB/s (86.0MB/s)(832MiB/10144msec) 00:24:44.607 slat (usec): min=20, max=88427, avg=3015.54, stdev=11000.19 00:24:44.607 clat (msec): min=23, max=334, avg=191.71, stdev=59.88 00:24:44.607 lat (msec): min=23, max=350, avg=194.73, stdev=61.42 00:24:44.607 clat percentiles (msec): 00:24:44.607 | 1.00th=[ 46], 5.00th=[ 100], 10.00th=[ 109], 20.00th=[ 122], 00:24:44.607 | 30.00th=[ 144], 40.00th=[ 197], 50.00th=[ 211], 60.00th=[ 224], 00:24:44.607 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 259], 95.00th=[ 268], 00:24:44.607 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330], 00:24:44.607 | 99.99th=[ 334] 00:24:44.607 bw ( KiB/s): min=63488, max=141824, per=12.90%, avg=83540.15, stdev=27903.74, samples=20 00:24:44.607 iops : min= 248, max= 554, avg=326.25, stdev=109.01, samples=20 00:24:44.607 lat (msec) : 50=1.32%, 100=3.94%, 250=80.14%, 500=14.60% 00:24:44.607 cpu : usr=0.16%, sys=1.28%, ctx=684, majf=0, minf=4097 00:24:44.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:24:44.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.607 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.607 job3: (groupid=0, jobs=1): err= 0: 
pid=93432: Thu Dec 5 09:56:30 2024 00:24:44.607 read: IOPS=320, BW=80.1MiB/s (84.0MB/s)(811MiB/10121msec) 00:24:44.607 slat (usec): min=20, max=127747, avg=3105.91, stdev=12628.65 00:24:44.607 clat (msec): min=27, max=391, avg=196.33, stdev=38.44 00:24:44.607 lat (msec): min=28, max=423, avg=199.44, stdev=40.04 00:24:44.607 clat percentiles (msec): 00:24:44.607 | 1.00th=[ 142], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 167], 00:24:44.607 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 197], 00:24:44.607 | 70.00th=[ 209], 80.00th=[ 215], 90.00th=[ 234], 95.00th=[ 262], 00:24:44.607 | 99.00th=[ 334], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 393], 00:24:44.607 | 99.99th=[ 393] 00:24:44.607 bw ( KiB/s): min=40286, max=96768, per=12.56%, avg=81341.10, stdev=13425.38, samples=20 00:24:44.607 iops : min= 157, max= 378, avg=317.60, stdev=52.45, samples=20 00:24:44.607 lat (msec) : 50=0.06%, 100=0.28%, 250=93.28%, 500=6.38% 00:24:44.607 cpu : usr=0.10%, sys=1.18%, ctx=684, majf=0, minf=4097 00:24:44.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:24:44.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=3242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job4: (groupid=0, jobs=1): err= 0: pid=93433: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=147, BW=36.8MiB/s (38.6MB/s)(377MiB/10224msec) 00:24:44.608 slat (usec): min=20, max=309916, avg=6750.57, stdev=31088.39 00:24:44.608 clat (msec): min=47, max=725, avg=427.02, stdev=79.08 00:24:44.608 lat (msec): min=89, max=890, avg=433.77, stdev=84.99 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 186], 5.00th=[ 243], 10.00th=[ 368], 20.00th=[ 384], 00:24:44.608 | 30.00th=[ 405], 40.00th=[ 422], 50.00th=[ 430], 60.00th=[ 443], 00:24:44.608 | 70.00th=[ 456], 80.00th=[ 477], 90.00th=[ 510], 95.00th=[ 550], 00:24:44.608 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 684], 99.95th=[ 726], 00:24:44.608 | 99.99th=[ 726] 00:24:44.608 bw ( KiB/s): min=30208, max=50075, per=5.70%, avg=36917.20, stdev=5865.40, samples=20 00:24:44.608 iops : min= 118, max= 195, avg=144.00, stdev=22.74, samples=20 00:24:44.608 lat (msec) : 50=0.07%, 100=0.27%, 250=5.71%, 500=80.94%, 750=13.01% 00:24:44.608 cpu : usr=0.06%, sys=0.67%, ctx=218, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job5: (groupid=0, jobs=1): err= 0: pid=93434: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=246, BW=61.7MiB/s (64.7MB/s)(625MiB/10135msec) 00:24:44.608 slat (usec): min=21, max=318615, avg=3798.36, stdev=16408.18 00:24:44.608 clat (msec): min=89, max=651, avg=255.25, stdev=85.01 00:24:44.608 lat (msec): min=89, max=907, avg=259.05, stdev=87.53 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 207], 00:24:44.608 | 30.00th=[ 218], 40.00th=[ 224], 50.00th=[ 232], 60.00th=[ 241], 00:24:44.608 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 338], 95.00th=[ 489], 00:24:44.608 | 99.00th=[ 592], 
99.50th=[ 592], 99.90th=[ 651], 99.95th=[ 651], 00:24:44.608 | 99.99th=[ 651] 00:24:44.608 bw ( KiB/s): min=32256, max=79006, per=9.64%, avg=62417.45, stdev=16096.58, samples=20 00:24:44.608 iops : min= 126, max= 308, avg=243.60, stdev=62.76, samples=20 00:24:44.608 lat (msec) : 100=0.16%, 250=69.09%, 500=27.15%, 750=3.60% 00:24:44.608 cpu : usr=0.09%, sys=1.06%, ctx=643, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=2501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job6: (groupid=0, jobs=1): err= 0: pid=93435: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=127, BW=31.9MiB/s (33.5MB/s)(327MiB/10231msec) 00:24:44.608 slat (usec): min=20, max=413562, avg=7690.81, stdev=36420.70 00:24:44.608 clat (msec): min=17, max=1165, avg=492.29, stdev=259.71 00:24:44.608 lat (msec): min=17, max=1165, avg=499.98, stdev=263.83 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 405], 00:24:44.608 | 30.00th=[ 439], 40.00th=[ 451], 50.00th=[ 485], 60.00th=[ 498], 00:24:44.608 | 70.00th=[ 567], 80.00th=[ 709], 90.00th=[ 860], 95.00th=[ 944], 00:24:44.608 | 99.00th=[ 1083], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1167], 00:24:44.608 | 99.99th=[ 1167] 00:24:44.608 bw ( KiB/s): min=10240, max=109787, per=4.91%, avg=31820.35, stdev=20325.34, samples=20 00:24:44.608 iops : min= 40, max= 428, avg=124.10, stdev=79.26, samples=20 00:24:44.608 lat (msec) : 20=1.15%, 50=10.25%, 100=4.97%, 250=0.08%, 500=44.45% 00:24:44.608 lat (msec) : 750=22.19%, 1000=15.53%, 2000=1.38% 00:24:44.608 cpu : usr=0.04%, sys=0.52%, ctx=301, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=1307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job7: (groupid=0, jobs=1): err= 0: pid=93436: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=309, BW=77.5MiB/s (81.2MB/s)(784MiB/10117msec) 00:24:44.608 slat (usec): min=20, max=200821, avg=3187.02, stdev=12790.60 00:24:44.608 clat (msec): min=63, max=439, avg=202.96, stdev=39.55 00:24:44.608 lat (msec): min=63, max=513, avg=206.15, stdev=41.08 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 144], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 178], 00:24:44.608 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 199], 60.00th=[ 203], 00:24:44.608 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 245], 95.00th=[ 288], 00:24:44.608 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 414], 00:24:44.608 | 99.99th=[ 439] 00:24:44.608 bw ( KiB/s): min=34816, max=98304, per=12.14%, avg=78609.95, stdev=12749.55, samples=20 00:24:44.608 iops : min= 136, max= 384, avg=306.85, stdev=49.81, samples=20 00:24:44.608 lat (msec) : 100=0.16%, 250=92.44%, 500=7.40% 00:24:44.608 cpu : usr=0.13%, sys=1.12%, ctx=649, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:44.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=3135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job8: (groupid=0, jobs=1): err= 0: pid=93437: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=146, BW=36.7MiB/s (38.4MB/s)(375MiB/10227msec) 00:24:44.608 slat (usec): min=21, max=207097, avg=6757.73, stdev=26483.26 00:24:44.608 clat (msec): min=23, max=705, avg=428.84, stdev=91.03 00:24:44.608 lat (msec): min=25, max=816, avg=435.60, stdev=95.12 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 83], 5.00th=[ 245], 10.00th=[ 368], 20.00th=[ 388], 00:24:44.608 | 30.00th=[ 397], 40.00th=[ 414], 50.00th=[ 422], 60.00th=[ 439], 00:24:44.608 | 70.00th=[ 460], 80.00th=[ 481], 90.00th=[ 527], 95.00th=[ 558], 00:24:44.608 | 99.00th=[ 667], 99.50th=[ 709], 99.90th=[ 709], 99.95th=[ 709], 00:24:44.608 | 99.99th=[ 709] 00:24:44.608 bw ( KiB/s): min=24064, max=46080, per=5.68%, avg=36761.00, stdev=5347.25, samples=20 00:24:44.608 iops : min= 94, max= 180, avg=143.45, stdev=20.81, samples=20 00:24:44.608 lat (msec) : 50=0.20%, 100=1.13%, 250=3.80%, 500=79.13%, 750=15.73% 00:24:44.608 cpu : usr=0.04%, sys=0.66%, ctx=246, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job9: (groupid=0, jobs=1): err= 0: pid=93438: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=302, BW=75.5MiB/s (79.2MB/s)(765MiB/10123msec) 00:24:44.608 slat (usec): min=21, max=245327, avg=3267.31, stdev=13840.84 00:24:44.608 clat (msec): min=32, max=383, avg=208.09, stdev=37.85 00:24:44.608 lat (msec): min=33, max=586, avg=211.35, stdev=39.75 00:24:44.608 clat percentiles (msec): 00:24:44.608 | 1.00th=[ 146], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 182], 00:24:44.608 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 209], 00:24:44.608 | 70.00th=[ 220], 80.00th=[ 230], 90.00th=[ 255], 95.00th=[ 288], 00:24:44.608 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 384], 00:24:44.608 | 99.99th=[ 384] 00:24:44.608 bw ( KiB/s): min=48224, max=92672, per=11.84%, avg=76670.60, stdev=11083.84, samples=20 00:24:44.608 iops : min= 188, max= 362, avg=299.40, stdev=43.33, samples=20 00:24:44.608 lat (msec) : 50=0.20%, 100=0.13%, 250=89.38%, 500=10.30% 00:24:44.608 cpu : usr=0.14%, sys=1.22%, ctx=520, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=3059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 job10: (groupid=0, jobs=1): err= 0: pid=93439: Thu Dec 5 09:56:30 2024 00:24:44.608 read: IOPS=141, BW=35.4MiB/s (37.2MB/s)(363MiB/10231msec) 00:24:44.608 slat (usec): min=21, max=308155, avg=6772.06, stdev=27655.76 00:24:44.608 clat (msec): min=24, max=774, avg=443.99, stdev=112.71 00:24:44.608 lat (msec): min=25, max=774, avg=450.76, stdev=115.91 00:24:44.608 clat percentiles 
(msec): 00:24:44.608 | 1.00th=[ 70], 5.00th=[ 249], 10.00th=[ 309], 20.00th=[ 401], 00:24:44.608 | 30.00th=[ 426], 40.00th=[ 439], 50.00th=[ 460], 60.00th=[ 477], 00:24:44.608 | 70.00th=[ 498], 80.00th=[ 514], 90.00th=[ 558], 95.00th=[ 600], 00:24:44.608 | 99.00th=[ 651], 99.50th=[ 651], 99.90th=[ 693], 99.95th=[ 776], 00:24:44.608 | 99.99th=[ 776] 00:24:44.608 bw ( KiB/s): min=21504, max=64512, per=5.48%, avg=35467.00, stdev=8929.70, samples=20 00:24:44.608 iops : min= 84, max= 252, avg=138.40, stdev=34.87, samples=20 00:24:44.608 lat (msec) : 50=0.28%, 100=3.31%, 250=2.83%, 500=65.24%, 750=28.28% 00:24:44.608 lat (msec) : 1000=0.07% 00:24:44.608 cpu : usr=0.02%, sys=0.59%, ctx=317, majf=0, minf=4097 00:24:44.608 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.7% 00:24:44.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.608 issued rwts: total=1450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.608 00:24:44.608 Run status group 0 (all jobs): 00:24:44.608 READ: bw=632MiB/s (663MB/s), 31.9MiB/s-83.6MiB/s (33.5MB/s-87.7MB/s), io=6470MiB (6784MB), run=10117-10231msec 00:24:44.608 00:24:44.608 Disk stats (read/write): 00:24:44.608 nvme0n1: ios=6657/0, merge=0/0, ticks=1231594/0, in_queue=1231594, util=97.63% 00:24:44.609 nvme10n1: ios=2790/0, merge=0/0, ticks=1217703/0, in_queue=1217703, util=97.95% 00:24:44.609 nvme1n1: ios=6528/0, merge=0/0, ticks=1239663/0, in_queue=1239663, util=97.99% 00:24:44.609 nvme2n1: ios=6357/0, merge=0/0, ticks=1237493/0, in_queue=1237493, util=98.01% 00:24:44.609 nvme3n1: ios=2885/0, merge=0/0, ticks=1241055/0, in_queue=1241055, util=98.07% 00:24:44.609 nvme4n1: ios=4875/0, merge=0/0, ticks=1237093/0, in_queue=1237093, util=98.36% 00:24:44.609 nvme5n1: ios=2486/0, merge=0/0, ticks=1218419/0, in_queue=1218419, util=98.44% 00:24:44.609 nvme6n1: ios=6142/0, merge=0/0, ticks=1235602/0, in_queue=1235602, util=98.45% 00:24:44.609 nvme7n1: ios=2872/0, merge=0/0, ticks=1236018/0, in_queue=1236018, util=98.87% 00:24:44.609 nvme8n1: ios=6073/0, merge=0/0, ticks=1246118/0, in_queue=1246118, util=99.01% 00:24:44.609 nvme9n1: ios=2772/0, merge=0/0, ticks=1227273/0, in_queue=1227273, util=99.04% 00:24:44.609 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:44.609 [global] 00:24:44.609 thread=1 00:24:44.609 invalidate=1 00:24:44.609 rw=randwrite 00:24:44.609 time_based=1 00:24:44.609 runtime=10 00:24:44.609 ioengine=libaio 00:24:44.609 direct=1 00:24:44.609 bs=262144 00:24:44.609 iodepth=64 00:24:44.609 norandommap=1 00:24:44.609 numjobs=1 00:24:44.609 00:24:44.609 [job0] 00:24:44.609 filename=/dev/nvme0n1 00:24:44.609 [job1] 00:24:44.609 filename=/dev/nvme10n1 00:24:44.609 [job2] 00:24:44.609 filename=/dev/nvme1n1 00:24:44.609 [job3] 00:24:44.609 filename=/dev/nvme2n1 00:24:44.609 [job4] 00:24:44.609 filename=/dev/nvme3n1 00:24:44.609 [job5] 00:24:44.609 filename=/dev/nvme4n1 00:24:44.609 [job6] 00:24:44.609 filename=/dev/nvme5n1 00:24:44.609 [job7] 00:24:44.609 filename=/dev/nvme6n1 00:24:44.609 [job8] 00:24:44.609 filename=/dev/nvme7n1 00:24:44.609 [job9] 00:24:44.609 filename=/dev/nvme8n1 00:24:44.609 [job10] 00:24:44.609 filename=/dev/nvme9n1 00:24:44.609 Could not set queue depth (nvme0n1) 00:24:44.609 Could not set queue 
depth (nvme10n1) 00:24:44.609 Could not set queue depth (nvme1n1) 00:24:44.609 Could not set queue depth (nvme2n1) 00:24:44.609 Could not set queue depth (nvme3n1) 00:24:44.609 Could not set queue depth (nvme4n1) 00:24:44.609 Could not set queue depth (nvme5n1) 00:24:44.609 Could not set queue depth (nvme6n1) 00:24:44.609 Could not set queue depth (nvme7n1) 00:24:44.609 Could not set queue depth (nvme8n1) 00:24:44.609 Could not set queue depth (nvme9n1) 00:24:44.609 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.609 fio-3.35 00:24:44.609 Starting 11 threads 00:24:54.646 00:24:54.646 job0: (groupid=0, jobs=1): err= 0: pid=93633: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=256, BW=64.2MiB/s (67.3MB/s)(653MiB/10180msec); 0 zone resets 00:24:54.646 slat (usec): min=20, max=36062, avg=3823.63, stdev=6700.99 00:24:54.646 clat (msec): min=9, max=421, avg=245.38, stdev=27.02 00:24:54.646 lat (msec): min=9, max=421, avg=249.21, stdev=26.64 00:24:54.646 clat percentiles (msec): 00:24:54.646 | 1.00th=[ 114], 5.00th=[ 226], 10.00th=[ 232], 20.00th=[ 239], 00:24:54.646 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:24:54.646 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 259], 95.00th=[ 262], 00:24:54.646 | 99.00th=[ 313], 99.50th=[ 376], 99.90th=[ 405], 99.95th=[ 422], 00:24:54.646 | 99.99th=[ 422] 00:24:54.646 bw ( KiB/s): min=61440, max=68096, per=6.83%, avg=65254.00, stdev=1559.07, samples=20 00:24:54.646 iops : min= 240, max= 266, avg=254.80, stdev= 6.12, samples=20 00:24:54.646 lat (msec) : 10=0.15%, 50=0.31%, 100=0.46%, 250=56.60%, 500=42.48% 00:24:54.646 cpu : usr=0.50%, sys=0.72%, ctx=1927, majf=0, minf=1 00:24:54.646 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:24:54.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.646 issued rwts: total=0,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.646 job1: (groupid=0, jobs=1): err= 0: pid=93634: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=347, BW=86.8MiB/s 
(91.0MB/s)(882MiB/10152msec); 0 zone resets 00:24:54.646 slat (usec): min=26, max=20176, avg=2820.17, stdev=4883.17 00:24:54.646 clat (msec): min=10, max=326, avg=181.36, stdev=23.07 00:24:54.646 lat (msec): min=10, max=326, avg=184.18, stdev=22.92 00:24:54.646 clat percentiles (msec): 00:24:54.646 | 1.00th=[ 63], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 176], 00:24:54.646 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:24:54.646 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 199], 95.00th=[ 203], 00:24:54.646 | 99.00th=[ 243], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 326], 00:24:54.646 | 99.99th=[ 326] 00:24:54.646 bw ( KiB/s): min=79872, max=94208, per=9.28%, avg=88618.10, stdev=4645.49, samples=20 00:24:54.646 iops : min= 312, max= 368, avg=346.10, stdev=18.18, samples=20 00:24:54.646 lat (msec) : 20=0.45%, 50=0.34%, 100=0.57%, 250=97.90%, 500=0.74% 00:24:54.646 cpu : usr=0.86%, sys=1.20%, ctx=3947, majf=0, minf=1 00:24:54.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:54.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.646 issued rwts: total=0,3526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.646 job2: (groupid=0, jobs=1): err= 0: pid=93652: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=257, BW=64.4MiB/s (67.5MB/s)(655MiB/10176msec); 0 zone resets 00:24:54.646 slat (usec): min=21, max=25489, avg=3814.90, stdev=6670.31 00:24:54.646 clat (msec): min=28, max=411, avg=244.65, stdev=27.13 00:24:54.646 lat (msec): min=28, max=411, avg=248.46, stdev=26.77 00:24:54.646 clat percentiles (msec): 00:24:54.646 | 1.00th=[ 94], 5.00th=[ 226], 10.00th=[ 232], 20.00th=[ 236], 00:24:54.646 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:24:54.646 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 259], 95.00th=[ 262], 00:24:54.646 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 397], 99.95th=[ 414], 00:24:54.646 | 99.99th=[ 414] 00:24:54.646 bw ( KiB/s): min=61440, max=67584, per=6.85%, avg=65439.95, stdev=1712.08, samples=20 00:24:54.646 iops : min= 240, max= 264, avg=255.55, stdev= 6.75, samples=20 00:24:54.646 lat (msec) : 50=0.46%, 100=0.61%, 250=58.36%, 500=40.57% 00:24:54.646 cpu : usr=0.49%, sys=0.70%, ctx=2951, majf=0, minf=1 00:24:54.646 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:24:54.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.646 issued rwts: total=0,2620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.646 job3: (groupid=0, jobs=1): err= 0: pid=93657: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=346, BW=86.7MiB/s (90.9MB/s)(879MiB/10143msec); 0 zone resets 00:24:54.646 slat (usec): min=26, max=18243, avg=2834.80, stdev=4893.66 00:24:54.646 clat (msec): min=11, max=315, avg=181.66, stdev=20.48 00:24:54.646 lat (msec): min=11, max=315, avg=184.49, stdev=20.22 00:24:54.646 clat percentiles (msec): 00:24:54.646 | 1.00th=[ 92], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 176], 00:24:54.646 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:24:54.646 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 199], 95.00th=[ 203], 00:24:54.646 | 99.00th=[ 241], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 317], 
00:24:54.646 | 99.99th=[ 317] 00:24:54.646 bw ( KiB/s): min=79872, max=94208, per=9.25%, avg=88387.55, stdev=4392.68, samples=20 00:24:54.646 iops : min= 312, max= 368, avg=345.20, stdev=17.19, samples=20 00:24:54.646 lat (msec) : 20=0.11%, 50=0.34%, 100=0.57%, 250=98.35%, 500=0.63% 00:24:54.646 cpu : usr=0.93%, sys=0.90%, ctx=4386, majf=0, minf=1 00:24:54.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:54.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.646 issued rwts: total=0,3517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.646 job4: (groupid=0, jobs=1): err= 0: pid=93658: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=480, BW=120MiB/s (126MB/s)(1214MiB/10096msec); 0 zone resets 00:24:54.646 slat (usec): min=26, max=15553, avg=2055.09, stdev=3692.04 00:24:54.646 clat (msec): min=21, max=206, avg=130.95, stdev=37.78 00:24:54.646 lat (msec): min=21, max=206, avg=133.01, stdev=38.20 00:24:54.646 clat percentiles (msec): 00:24:54.646 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:24:54.646 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 112], 00:24:54.646 | 70.00th=[ 116], 80.00th=[ 188], 90.00th=[ 197], 95.00th=[ 199], 00:24:54.646 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 207], 00:24:54.646 | 99.99th=[ 207] 00:24:54.646 bw ( KiB/s): min=81756, max=151760, per=12.84%, avg=122653.85, stdev=32492.56, samples=20 00:24:54.646 iops : min= 319, max= 592, avg=479.00, stdev=126.89, samples=20 00:24:54.646 lat (msec) : 50=0.25%, 100=0.86%, 250=98.89% 00:24:54.646 cpu : usr=1.18%, sys=1.36%, ctx=6479, majf=0, minf=1 00:24:54.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.646 issued rwts: total=0,4856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.646 job5: (groupid=0, jobs=1): err= 0: pid=93659: Thu Dec 5 09:56:41 2024 00:24:54.646 write: IOPS=294, BW=73.6MiB/s (77.2MB/s)(748MiB/10152msec); 0 zone resets 00:24:54.646 slat (usec): min=17, max=31938, avg=3266.98, stdev=5852.05 00:24:54.647 clat (msec): min=38, max=389, avg=213.92, stdev=33.83 00:24:54.647 lat (msec): min=38, max=389, avg=217.18, stdev=33.97 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 81], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:24:54.647 | 30.00th=[ 194], 40.00th=[ 199], 50.00th=[ 222], 60.00th=[ 230], 00:24:54.647 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 249], 00:24:54.647 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 376], 99.95th=[ 388], 00:24:54.647 | 99.99th=[ 388] 00:24:54.647 bw ( KiB/s): min=65536, max=95232, per=7.84%, avg=74915.50, stdev=8762.94, samples=20 00:24:54.647 iops : min= 256, max= 372, avg=292.60, stdev=34.22, samples=20 00:24:54.647 lat (msec) : 50=0.20%, 100=1.34%, 250=94.88%, 500=3.58% 00:24:54.647 cpu : usr=0.61%, sys=0.88%, ctx=3441, majf=0, minf=1 00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.647 
issued rwts: total=0,2990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.647 job6: (groupid=0, jobs=1): err= 0: pid=93660: Thu Dec 5 09:56:41 2024 00:24:54.647 write: IOPS=256, BW=64.2MiB/s (67.3MB/s)(653MiB/10178msec); 0 zone resets 00:24:54.647 slat (usec): min=17, max=38142, avg=3828.30, stdev=6701.48 00:24:54.647 clat (msec): min=26, max=407, avg=245.44, stdev=24.15 00:24:54.647 lat (msec): min=26, max=407, avg=249.27, stdev=23.63 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 128], 5.00th=[ 226], 10.00th=[ 232], 20.00th=[ 236], 00:24:54.647 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:24:54.647 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 259], 95.00th=[ 262], 00:24:54.647 | 99.00th=[ 305], 99.50th=[ 363], 99.90th=[ 393], 99.95th=[ 409], 00:24:54.647 | 99.99th=[ 409] 00:24:54.647 bw ( KiB/s): min=61440, max=67584, per=6.83%, avg=65234.95, stdev=1541.66, samples=20 00:24:54.647 iops : min= 240, max= 264, avg=254.75, stdev= 6.05, samples=20 00:24:54.647 lat (msec) : 50=0.15%, 100=0.50%, 250=58.69%, 500=40.66% 00:24:54.647 cpu : usr=0.56%, sys=0.70%, ctx=2427, majf=0, minf=1 00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.647 issued rwts: total=0,2612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.647 job7: (groupid=0, jobs=1): err= 0: pid=93661: Thu Dec 5 09:56:41 2024 00:24:54.647 write: IOPS=480, BW=120MiB/s (126MB/s)(1214MiB/10095msec); 0 zone resets 00:24:54.647 slat (usec): min=24, max=15347, avg=2048.93, stdev=3687.80 00:24:54.647 clat (msec): min=11, max=205, avg=130.98, stdev=37.83 00:24:54.647 lat (msec): min=11, max=206, avg=133.02, stdev=38.24 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:24:54.647 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 112], 00:24:54.647 | 70.00th=[ 116], 80.00th=[ 188], 90.00th=[ 197], 95.00th=[ 199], 00:24:54.647 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 207], 00:24:54.647 | 99.99th=[ 207] 00:24:54.647 bw ( KiB/s): min=81756, max=151552, per=12.84%, avg=122628.70, stdev=32420.81, samples=20 00:24:54.647 iops : min= 319, max= 592, avg=478.90, stdev=126.62, samples=20 00:24:54.647 lat (msec) : 20=0.04%, 50=0.29%, 100=0.74%, 250=98.93% 00:24:54.647 cpu : usr=1.19%, sys=1.34%, ctx=6048, majf=0, minf=1 00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.647 issued rwts: total=0,4855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.647 job8: (groupid=0, jobs=1): err= 0: pid=93662: Thu Dec 5 09:56:41 2024 00:24:54.647 write: IOPS=338, BW=84.7MiB/s (88.8MB/s)(862MiB/10186msec); 0 zone resets 00:24:54.647 slat (usec): min=18, max=19512, avg=2896.67, stdev=5244.95 00:24:54.647 clat (msec): min=11, max=414, avg=186.03, stdev=53.82 00:24:54.647 lat (msec): min=12, max=414, avg=188.92, stdev=54.40 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 68], 5.00th=[ 133], 10.00th=[ 136], 
20.00th=[ 142], 00:24:54.647 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 228], 00:24:54.647 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 249], 00:24:54.647 | 99.00th=[ 284], 99.50th=[ 355], 99.90th=[ 401], 99.95th=[ 414], 00:24:54.647 | 99.99th=[ 414] 00:24:54.647 bw ( KiB/s): min=65536, max=119022, per=9.07%, avg=86656.85, stdev=23474.53, samples=20 00:24:54.647 iops : min= 256, max= 464, avg=338.40, stdev=91.66, samples=20 00:24:54.647 lat (msec) : 20=0.35%, 50=0.35%, 100=0.93%, 250=94.84%, 500=3.54% 00:24:54.647 cpu : usr=0.74%, sys=0.86%, ctx=4415, majf=0, minf=1 00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.647 issued rwts: total=0,3449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.647 job9: (groupid=0, jobs=1): err= 0: pid=93663: Thu Dec 5 09:56:41 2024 00:24:54.647 write: IOPS=338, BW=84.5MiB/s (88.6MB/s)(860MiB/10176msec); 0 zone resets 00:24:54.647 slat (usec): min=18, max=19409, avg=2903.59, stdev=5258.11 00:24:54.647 clat (msec): min=12, max=400, avg=186.28, stdev=52.98 00:24:54.647 lat (msec): min=12, max=400, avg=189.18, stdev=53.55 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 71], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 142], 00:24:54.647 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 228], 00:24:54.647 | 70.00th=[ 234], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 249], 00:24:54.647 | 99.00th=[ 268], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 401], 00:24:54.647 | 99.99th=[ 401] 00:24:54.647 bw ( KiB/s): min=65536, max=116736, per=9.05%, avg=86428.45, stdev=23219.24, samples=20 00:24:54.647 iops : min= 256, max= 456, avg=337.55, stdev=90.72, samples=20 00:24:54.647 lat (msec) : 20=0.12%, 50=0.46%, 100=0.81%, 250=95.09%, 500=3.52% 00:24:54.647 cpu : usr=0.63%, sys=1.03%, ctx=3905, majf=0, minf=1 00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.647 issued rwts: total=0,3441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.647 job10: (groupid=0, jobs=1): err= 0: pid=93664: Thu Dec 5 09:56:41 2024 00:24:54.647 write: IOPS=348, BW=87.2MiB/s (91.4MB/s)(883MiB/10125msec); 0 zone resets 00:24:54.647 slat (usec): min=17, max=58867, avg=2782.31, stdev=4934.33 00:24:54.647 clat (msec): min=54, max=295, avg=180.67, stdev=17.87 00:24:54.647 lat (msec): min=59, max=295, avg=183.45, stdev=17.59 00:24:54.647 clat percentiles (msec): 00:24:54.647 | 1.00th=[ 95], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 176], 00:24:54.647 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:24:54.647 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 199], 95.00th=[ 201], 00:24:54.647 | 99.00th=[ 218], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 296], 00:24:54.647 | 99.99th=[ 296] 00:24:54.647 bw ( KiB/s): min=75264, max=98304, per=9.29%, avg=88763.20, stdev=5485.32, samples=20 00:24:54.647 iops : min= 294, max= 384, avg=346.70, stdev=21.43, samples=20 00:24:54.647 lat (msec) : 100=1.10%, 250=98.39%, 500=0.51% 00:24:54.647 cpu : usr=0.88%, sys=1.03%, ctx=5021, majf=0, minf=1 
00:24:54.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:24:54.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:54.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:54.647 issued rwts: total=0,3531,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:54.647 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:54.647
00:24:54.647 Run status group 0 (all jobs):
00:24:54.647 WRITE: bw=933MiB/s (978MB/s), 64.2MiB/s-120MiB/s (67.3MB/s-126MB/s), io=9503MiB (9964MB), run=10095-10186msec
00:24:54.647
00:24:54.647 Disk stats (read/write):
00:24:54.647 nvme0n1: ios=49/5084, merge=0/0, ticks=54/1204594, in_queue=1204648, util=97.66%
00:24:54.647 nvme10n1: ios=49/6920, merge=0/0, ticks=142/1211448, in_queue=1211590, util=98.29%
00:24:54.647 nvme1n1: ios=34/5092, merge=0/0, ticks=55/1203854, in_queue=1203909, util=97.83%
00:24:54.647 nvme2n1: ios=28/6885, merge=0/0, ticks=102/1208388, in_queue=1208490, util=98.16%
00:24:54.647 nvme3n1: ios=18/9546, merge=0/0, ticks=147/1213002, in_queue=1213149, util=98.28%
00:24:54.647 nvme4n1: ios=0/5827, merge=0/0, ticks=0/1202720, in_queue=1202720, util=97.83%
00:24:54.647 nvme5n1: ios=0/5073, merge=0/0, ticks=0/1204191, in_queue=1204191, util=98.15%
00:24:54.647 nvme6n1: ios=0/9549, merge=0/0, ticks=0/1212927, in_queue=1212927, util=98.34%
00:24:54.647 nvme7n1: ios=0/6764, merge=0/0, ticks=0/1207851, in_queue=1207851, util=98.79%
00:24:54.647 nvme8n1: ios=0/6737, merge=0/0, ticks=0/1205365, in_queue=1205365, util=98.74%
00:24:54.647 nvme9n1: ios=0/6892, merge=0/0, ticks=0/1206523, in_queue=1206523, util=98.53%
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:24:54.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1
00:24:54.647 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
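Note: the run summary above is fio's normal output for the 11 namespaces the test connected (nvme0n1 through nvme10n1 in the disk stats). A minimal job-file sketch that would produce a workload of this shape follows; it is a reconstruction, not the harness's actual job file. The 256 KiB block size is inferred from the per-job bandwidth/IOPS ratios above, depth=64 and the ~10 s runtime are read directly from the fio output, and the job-file path and libaio engine are assumptions (the real job file also keeps verify state, per the local-job0-0-verify.state cleanup later in this log).

# Hypothetical reconstruction: one sequential-write job per connected namespace.
cat > /tmp/multiconn.fio <<'EOF'
[global]
rw=write        # all-write run: the summary reports WRITE bandwidth only
bs=256k         # inferred: per-job BW/IOPS ratios above work out to ~256 KiB
iodepth=64      # fio's latency lines above report depth=64
ioengine=libaio # assumption; direct I/O against the kernel initiator devices
direct=1
runtime=10      # per-job runtimes above span 10095-10186 ms
time_based=1
EOF
i=0
for dev in /dev/nvme{0..10}n1; do
  printf '[job%d]\nfilename=%s\n' "$i" "$dev" >> /tmp/multiconn.fio
  i=$((i+1))
done
fio /tmp/multiconn.fio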
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
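The same four-step sequence repeats below for cnode3 through cnode11. Condensed into plain shell, the loop traced at multiconnection.sh lines 37-40 amounts to the sketch below; NVMF_SUBSYS=11 comes from the "seq 1 11" trace above, and rpc_cmd (which the harness uses as its RPC wrapper) is shown here as a direct scripts/rpc.py invocation for clarity.

# Sketch of the traced teardown loop: detach the kernel initiator, wait for
# the namespace's serial to vanish from lsblk, then delete the subsystem on
# the SPDK target side.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
  waitforserial_disconnect "SPDK${i}"   # helper sketched further below
  scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done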
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:54.648 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.648 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.649 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:54.907 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
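The waitforserial_disconnect traces repeated above only expose the helper's probes (lsblk piped into grep -q -w, once in default and once in -l list form) and its final return 0. A plausible reconstruction, with the retry bound and sleep interval as explicit assumptions:

# Sketch, assuming a bounded poll; only the lsblk/grep probes and the
# "return 0" are visible in the trace.
waitforserial_disconnect() {
  local serial=$1 i=0
  # keep polling while any block device still advertises the serial
  while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
    i=$((i + 1))
    [ "$i" -gt 15 ] && return 1   # assumed bound; never reached in this run
    sleep 1
  done
  # final recheck against lsblk's flat list output, as in the trace
  if lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
    return 1
  fi
  return 0
}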
00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:54.907 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:54.907 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.908 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.908 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.908 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.908 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:55.167 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- 
# set +x 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.167 rmmod nvme_tcp 00:24:55.167 rmmod nvme_fabrics 00:24:55.167 rmmod nvme_keyring 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 92950 ']' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 92950 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 92950 ']' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 92950 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92950 00:24:55.167 killing process with pid 92950 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92950' 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 92950 00:24:55.167 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 92950 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.452 09:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:24:58.452 00:24:58.452 real 0m53.509s 00:24:58.452 user 3m10.991s 00:24:58.452 sys 0m16.774s 00:24:58.452 ************************************ 00:24:58.452 END TEST nvmf_multiconnection 00:24:58.452 ************************************ 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 09:56:45 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.452 ************************************ 00:24:58.452 START TEST nvmf_initiator_timeout 00:24:58.452 ************************************ 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:58.452 * Looking for test storage... 00:24:58.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:24:58.452 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.453 --rc genhtml_branch_coverage=1 00:24:58.453 --rc genhtml_function_coverage=1 00:24:58.453 --rc genhtml_legend=1 00:24:58.453 --rc geninfo_all_blocks=1 00:24:58.453 --rc geninfo_unexecuted_blocks=1 00:24:58.453 00:24:58.453 ' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.453 --rc genhtml_branch_coverage=1 00:24:58.453 --rc genhtml_function_coverage=1 00:24:58.453 --rc genhtml_legend=1 00:24:58.453 --rc geninfo_all_blocks=1 00:24:58.453 --rc geninfo_unexecuted_blocks=1 00:24:58.453 00:24:58.453 ' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.453 --rc genhtml_branch_coverage=1 00:24:58.453 --rc genhtml_function_coverage=1 00:24:58.453 --rc genhtml_legend=1 00:24:58.453 --rc geninfo_all_blocks=1 00:24:58.453 --rc geninfo_unexecuted_blocks=1 00:24:58.453 00:24:58.453 ' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.453 --rc genhtml_branch_coverage=1 00:24:58.453 --rc genhtml_function_coverage=1 00:24:58.453 --rc genhtml_legend=1 00:24:58.453 --rc geninfo_all_blocks=1 00:24:58.453 --rc geninfo_unexecuted_blocks=1 00:24:58.453 00:24:58.453 ' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.453 09:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.453 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
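The variables above and just below name the pieces of the virtual test network; the nvmf_veth_init trace that follows first clears any leftovers (hence the "Cannot find device" lines) and then assembles the topology. Condensed to one interface of each pair (the *2 interfaces are set up identically) and omitting the iptables comment tags, the commands below are taken verbatim from that trace: a veth pair for the initiator side (10.0.0.1, host namespace), a pair for the target side (10.0.0.3, inside nvmf_tgt_ns_spdk), both bridge ends joined to nvmf_br, and TCP/4420 opened for NVMe-oF.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br   # bridge joins the two halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator-to-target reachability check, as in the trace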
00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:58.454 Cannot find device "nvmf_init_br" 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:58.454 Cannot find device "nvmf_init_br2" 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:58.454 Cannot find device "nvmf_tgt_br" 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:24:58.454 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.454 Cannot find device "nvmf_tgt_br2" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:58.454 Cannot find device "nvmf_init_br" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:58.454 Cannot find device "nvmf_init_br2" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:58.454 Cannot find device "nvmf_tgt_br" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:58.454 Cannot find device "nvmf_tgt_br2" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:24:58.454 09:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:58.454 Cannot find device "nvmf_br" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:58.454 Cannot find device "nvmf_init_if" 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:24:58.454 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:58.712 Cannot find device "nvmf_init_if2" 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.712 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:58.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:58.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:58.970 00:24:58.970 --- 10.0.0.3 ping statistics --- 00:24:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.970 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:58.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:58.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:24:58.970 00:24:58.970 --- 10.0.0.4 ping statistics --- 00:24:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.970 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:58.970 00:24:58.970 --- 10.0.0.1 ping statistics --- 00:24:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.970 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:58.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:58.970 00:24:58.970 --- 10.0.0.2 ping statistics --- 00:24:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.970 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=94103 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 94103 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 94103 ']' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.970 09:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.970 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.970 [2024-12-05 09:56:46.525345] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:58.970 [2024-12-05 09:56:46.525503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.228 [2024-12-05 09:56:46.704705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.487 [2024-12-05 09:56:46.857899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.487 [2024-12-05 09:56:46.858256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.487 [2024-12-05 09:56:46.858386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.487 [2024-12-05 09:56:46.858501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.487 [2024-12-05 09:56:46.858616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
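At this point nvmfappstart has launched the target inside the test namespace and waitforlisten is polling until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait step, assuming SPDK's rpc.py and the paths shown in the log (the real waitforlisten in autotest_common.sh additionally handles retry limits and cleanup on failure):

    #!/usr/bin/env bash
    # Sketch only: start nvmf_tgt in the test namespace, then wait for its RPC socket.
    NS=nvmf_tgt_ns_spdk
    APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket; rpc.py exits non-zero until the app is up and listening.
    for ((i = 0; i < 100; i++)); do
        if "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done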
00:24:59.487 [2024-12-05 09:56:46.861442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.487 [2024-12-05 09:56:46.861584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.487 [2024-12-05 09:56:46.862515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.487 [2024-12-05 09:56:46.862553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 Malloc0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 Delay0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 [2024-12-05 09:56:47.645636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.055 [2024-12-05 09:56:47.677983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.055 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:25:00.315 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:00.315 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:25:00.315 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.315 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:00.315 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=94181 00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 
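Everything the test needs is now wired up: a malloc bdev wrapped in a delay bdev, a TCP transport, a subsystem exposing the delayed namespace on 10.0.0.3:4420, and a kernel-initiator connection. Condensed into plain rpc.py calls, this is a sketch of the rpc_cmd sequence above, not the script itself; the test will later raise the Delay0 latencies to ~31 s with bdev_delay_update_latency to provoke the initiator timeout while fio runs, then drop them back to 30 us:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # avg/p99 latencies in us
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Connect from the host-side initiator veth to the namespaced listener.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 \
        --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516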
00:25:02.850 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:02.850 [global] 00:25:02.850 thread=1 00:25:02.850 invalidate=1 00:25:02.850 rw=write 00:25:02.850 time_based=1 00:25:02.850 runtime=60 00:25:02.850 ioengine=libaio 00:25:02.850 direct=1 00:25:02.850 bs=4096 00:25:02.850 iodepth=1 00:25:02.850 norandommap=0 00:25:02.850 numjobs=1 00:25:02.850 00:25:02.850 verify_dump=1 00:25:02.850 verify_backlog=512 00:25:02.850 verify_state_save=0 00:25:02.850 do_verify=1 00:25:02.850 verify=crc32c-intel 00:25:02.850 [job0] 00:25:02.850 filename=/dev/nvme0n1 00:25:02.850 Could not set queue depth (nvme0n1) 00:25:02.850 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:02.850 fio-3.35 00:25:02.850 Starting 1 thread 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:05.385 true 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:05.385 true 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:05.385 true 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:05.385 true 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.385 09:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.673 true 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.673 true 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.673 true 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.673 true 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:08.673 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 94181 00:26:04.907 00:26:04.907 job0: (groupid=0, jobs=1): err= 0: pid=94206: Thu Dec 5 09:57:50 2024 00:26:04.907 read: IOPS=714, BW=2858KiB/s (2926kB/s)(167MiB/60000msec) 00:26:04.907 slat (usec): min=11, max=11836, avg=14.84, stdev=70.95 00:26:04.907 clat (usec): min=186, max=40484k, avg=1176.17, stdev=195541.88 00:26:04.907 lat (usec): min=198, max=40484k, avg=1191.01, stdev=195541.90 00:26:04.907 clat percentiles (usec): 00:26:04.907 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:26:04.907 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:26:04.907 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:26:04.907 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 388], 99.95th=[ 529], 00:26:04.907 | 99.99th=[ 807] 00:26:04.907 write: IOPS=716, BW=2867KiB/s (2936kB/s)(168MiB/60000msec); 0 zone resets 00:26:04.907 slat (usec): min=15, max=767, avg=20.53, stdev= 7.44 00:26:04.907 clat (usec): min=137, max=1879, avg=184.62, stdev=25.15 00:26:04.907 lat (usec): min=164, max=1914, avg=205.15, stdev=26.91 00:26:04.907 clat percentiles (usec): 00:26:04.907 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:26:04.907 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:26:04.907 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 219], 00:26:04.907 | 99.00th=[ 245], 99.50th=[ 262], 99.90th=[ 404], 99.95th=[ 498], 00:26:04.907 | 99.99th=[ 775] 00:26:04.907 bw ( KiB/s): min= 4096, max= 
9944, per=100.00%, avg=8611.90, stdev=1231.20, samples=39 00:26:04.907 iops : min= 1024, max= 2486, avg=2153.03, stdev=307.83, samples=39 00:26:04.907 lat (usec) : 250=90.75%, 500=9.20%, 750=0.04%, 1000=0.01% 00:26:04.907 lat (msec) : 2=0.01%, >=2000=0.01% 00:26:04.907 cpu : usr=0.46%, sys=1.87%, ctx=85881, majf=0, minf=5 00:26:04.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.907 issued rwts: total=42863,43008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:04.907 00:26:04.907 Run status group 0 (all jobs): 00:26:04.907 READ: bw=2858KiB/s (2926kB/s), 2858KiB/s-2858KiB/s (2926kB/s-2926kB/s), io=167MiB (176MB), run=60000-60000msec 00:26:04.907 WRITE: bw=2867KiB/s (2936kB/s), 2867KiB/s-2867KiB/s (2936kB/s-2936kB/s), io=168MiB (176MB), run=60000-60000msec 00:26:04.907 00:26:04.907 Disk stats (read/write): 00:26:04.907 nvme0n1: ios=42741/42952, merge=0/0, ticks=10503/8598, in_queue=19101, util=99.78% 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:04.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:04.907 nvmf hotplug test: fio successful as expected 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:04.907 09:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.907 rmmod nvme_tcp 00:26:04.907 rmmod nvme_fabrics 00:26:04.907 rmmod nvme_keyring 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 94103 ']' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 94103 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 94103 ']' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 94103 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94103 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.907 killing process with pid 94103 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94103' 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 94103 00:26:04.907 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 94103 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@791 -- # iptables-restore 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:26:04.907 00:26:04.907 real 1m5.952s 00:26:04.907 user 4m9.099s 00:26:04.907 sys 0m8.179s 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.907 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.907 ************************************ 00:26:04.907 END TEST nvmf_initiator_timeout 00:26:04.908 ************************************ 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:04.908 ************************************ 00:26:04.908 START TEST nvmf_nsid 00:26:04.908 ************************************ 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:04.908 * Looking for test storage... 00:26:04.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.908 --rc genhtml_branch_coverage=1 00:26:04.908 --rc genhtml_function_coverage=1 00:26:04.908 --rc genhtml_legend=1 00:26:04.908 --rc geninfo_all_blocks=1 00:26:04.908 --rc geninfo_unexecuted_blocks=1 00:26:04.908 00:26:04.908 ' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.908 --rc genhtml_branch_coverage=1 00:26:04.908 --rc genhtml_function_coverage=1 00:26:04.908 --rc genhtml_legend=1 00:26:04.908 --rc geninfo_all_blocks=1 00:26:04.908 --rc geninfo_unexecuted_blocks=1 00:26:04.908 00:26:04.908 ' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.908 --rc genhtml_branch_coverage=1 00:26:04.908 --rc genhtml_function_coverage=1 00:26:04.908 --rc genhtml_legend=1 00:26:04.908 --rc geninfo_all_blocks=1 00:26:04.908 --rc geninfo_unexecuted_blocks=1 00:26:04.908 00:26:04.908 ' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.908 --rc genhtml_branch_coverage=1 00:26:04.908 --rc genhtml_function_coverage=1 00:26:04.908 --rc genhtml_legend=1 00:26:04.908 --rc geninfo_all_blocks=1 00:26:04.908 --rc geninfo_unexecuted_blocks=1 00:26:04.908 00:26:04.908 ' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
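The "Looking for test storage" preamble above runs cmp_versions from scripts/common.sh ("lt 1.15 2") to pick lcov coverage flags: both version strings are split on dots and compared field by field. The same pattern in isolation, as a simplified sketch (the real cmp_versions also supports >, =, and mixed-length version strings more carefully):

    # Return 0 (true) when dotted version $1 sorts before $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal
    }

    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'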
00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.908 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:04.909 Cannot find device "nvmf_init_br" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:04.909 Cannot find device "nvmf_init_br2" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:04.909 Cannot find device "nvmf_tgt_br" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.909 Cannot find device "nvmf_tgt_br2" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:04.909 Cannot find device "nvmf_init_br" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:04.909 Cannot find device "nvmf_init_br2" 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:26:04.909 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:04.909 Cannot find device "nvmf_tgt_br" 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:04.909 Cannot find device "nvmf_tgt_br2" 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:04.909 Cannot find device "nvmf_br" 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:04.909 Cannot find device "nvmf_init_if" 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:04.909 Cannot find device "nvmf_init_if2" 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:26:04.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
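nvmf_veth_init has now rebuilt the same four-veth topology for the nsid test: host-side "initiator" veths, namespaced "target" veths, all joined by one bridge. Reduced to a single initiator/target pair, the layout these commands build is (a condensed sketch; the script also creates the *_if2/*_br2 pair for 10.0.0.2 and 10.0.0.4):

    # One host-side initiator veth and one namespaced target veth, joined by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host end <-> bridge end
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # netns end <-> bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # enslave both bridge ends
    ip link set nvmf_tgt_br master nvmf_br

    ping -c 1 10.0.0.3   # host initiator can now reach the namespaced target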
00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:04.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:04.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:26:04.909 00:26:04.909 --- 10.0.0.3 ping statistics --- 00:26:04.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.909 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:04.909 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:04.910 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:04.910 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:26:04.910 00:26:04.910 --- 10.0.0.4 ping statistics --- 00:26:04.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.910 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:04.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:26:04.910 00:26:04.910 --- 10.0.0.1 ping statistics --- 00:26:04.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.910 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:04.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:04.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:26:04.910 00:26:04.910 --- 10.0.0.2 ping statistics --- 00:26:04.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.910 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=95061 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 95061 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 95061 ']' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.910 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:04.910 [2024-12-05 09:57:52.458120] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
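Two things worth noting in the records above. First, every ipts call expands to plain iptables plus an '-m comment --comment SPDK_NVMF:<original args>' suffix; that tag is how the iptr cleanup later in the run can strip exactly the rules this test installed (iptables-save | grep -v SPDK_NVMF | iptables-restore) without tracking rule handles. Second, the four pings prove both cross-namespace directions work before any NVMe traffic is attempted. The two helpers, reconstructed from the expansions the trace shows:

# Install an iptables rule tagged with its own argument string.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Remove every rule carrying the tag by round-tripping the whole ruleset.
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Usage, as in the trace: open TCP/4420 on both initiator interfaces and
# allow hairpin forwarding on the bridge.
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT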
00:26:04.910 [2024-12-05 09:57:52.458306] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.169 [2024-12-05 09:57:52.654911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.437 [2024-12-05 09:57:52.803658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.437 [2024-12-05 09:57:52.803763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.437 [2024-12-05 09:57:52.803796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.437 [2024-12-05 09:57:52.803839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.437 [2024-12-05 09:57:52.803863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.437 [2024-12-05 09:57:52.805495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=95111 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
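From here on the test runs two SPDK targets side by side: the nvmf_tgt launched above inside the namespace (PID 95061, answering RPCs on the default /var/tmp/spdk.sock) and a second spdk_tgt on core mask 0x2 with a private RPC socket, /var/tmp/tgt2.sock. The -r flag is what keeps the two RPC surfaces apart; any rpc.py call meant for the second target must pass -s explicitly. In sketch form (binary paths shortened from the trace; rpc_get_methods is used here only as a harmless probe):

# Target 1: inside the namespace, default RPC socket.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &

# Target 2: host namespace, core 1, private RPC socket.
./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &

# RPCs are routed purely by socket path:
./scripts/rpc.py rpc_get_methods                        # target 1
./scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods  # target 2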
00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ad61a2bb-bcba-4e9b-9152-3bdcf1820582 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1f2df04b-ab6a-4a04-95d6-b5de7afec095 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6a838b56-74c8-44af-a4e1-f36c4f175511 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:06.005 null0 00:26:06.005 null1 00:26:06.005 null2 00:26:06.005 [2024-12-05 09:57:53.545077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.005 [2024-12-05 09:57:53.569327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:06.005 [2024-12-05 09:57:53.600924] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:06.005 [2024-12-05 09:57:53.601093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95111 ] 00:26:06.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 95111 /var/tmp/tgt2.sock 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 95111 ']' 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
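The three uuidgen calls pin the namespace identities down before anything is created; the rpc_cmd that follows (its here-document body is collapsed in the trace, only the null0/null1/null2 output is visible) builds the bdevs and subsystems behind them. The exact RPC sequence is therefore not recoverable from this log, but a hypothetical equivalent with rpc.py could look like the sketch below. The NQN, address, and port are taken from the nearby listen and nvme connect lines; the socket (see nsid.sh@51 above), bdev sizes, and flag spelling are assumptions to verify against the SPDK tree in use:

sock=/var/tmp/tgt2.sock                  # assumed pairing with the second target
nqn=nqn.2024-10.io.spdk:cnode2           # from the nvme connect line below
uuids=("$(uuidgen)" "$(uuidgen)" "$(uuidgen)")

./scripts/rpc.py -s "$sock" nvmf_create_transport -t tcp
./scripts/rpc.py -s "$sock" nvmf_create_subsystem "$nqn" -a
./scripts/rpc.py -s "$sock" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.1 -s 4421
for i in 0 1 2; do
    # 64 MiB null bdev, 512-byte blocks, exported with a fixed namespace UUID.
    ./scripts/rpc.py -s "$sock" bdev_null_create "null$i" 64 512
    ./scripts/rpc.py -s "$sock" nvmf_subsystem_add_ns "$nqn" "null$i" -u "${uuids[$i]}"
done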
00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.005 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:06.264 [2024-12-05 09:57:53.771974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.264 [2024-12-05 09:57:53.877628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.201 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.201 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:07.201 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:07.460 [2024-12-05 09:57:55.033594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.460 [2024-12-05 09:57:55.049769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:07.460 nvme0n1 nvme0n2 00:26:07.460 nvme1n1 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:07.727 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:08.661 09:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ad61a2bb-bcba-4e9b-9152-3bdcf1820582 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:08.661 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad61a2bbbcba4e9b91523bdcf1820582 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD61A2BBBCBA4E9B91523BDCF1820582 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AD61A2BBBCBA4E9B91523BDCF1820582 == \A\D\6\1\A\2\B\B\B\C\B\A\4\E\9\B\9\1\5\2\3\B\D\C\F\1\8\2\0\5\8\2 ]] 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1f2df04b-ab6a-4a04-95d6-b5de7afec095 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:08.919 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1f2df04bab6a4a0495d6b5de7afec095 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1F2DF04BAB6A4A0495D6B5DE7AFEC095 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1F2DF04BAB6A4A0495D6B5DE7AFEC095 == \1\F\2\D\F\0\4\B\A\B\6\A\4\A\0\4\9\5\D\6\B\5\D\E\7\A\F\E\C\0\9\5 ]] 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:08.920 09:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6a838b56-74c8-44af-a4e1-f36c4f175511 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6a838b5674c844afa4e1f36c4f175511 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6A838B5674C844AFA4E1F36C4F175511 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6A838B5674C844AFA4E1F36C4F175511 == \6\A\8\3\8\B\5\6\7\4\C\8\4\4\A\F\A\4\E\1\F\3\6\C\4\F\1\7\5\5\1\1 ]] 00:26:08.920 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 95111 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 95111 ']' 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 95111 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95111 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.179 killing process with pid 95111 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95111' 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 95111 00:26:09.179 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 95111 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
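The three [[ ... == ... ]] comparisons above are the whole point of the nsid test: for each namespace, the NGUID reported by the kernel must equal the UUID assigned at creation with the dashes removed and the hex upper-cased. uuid2nguid is essentially 'tr -d -' plus upper-casing, and nvme_get_nguid reads the live value with nvme id-ns and jq, exactly as the trace shows. The check in isolation (device name and UUID below are the first pair from the trace):

# UUID -> NGUID: strip dashes, upper-case the hex.
uuid2nguid() {
    tr -d - <<< "${1^^}"
}

# Read the NGUID the kernel currently reports for a namespace block device.
nvme_get_nguid() {
    nvme id-ns "/dev/$1" -o json | jq -r .nguid
}

expect=$(uuid2nguid ad61a2bb-bcba-4e9b-9152-3bdcf1820582)
actual=$(nvme_get_nguid nvme0n1)
[[ ${actual^^} == "$expect" ]] || echo "NGUID mismatch: $actual != $expect"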
00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.082 rmmod nvme_tcp 00:26:11.082 rmmod nvme_fabrics 00:26:11.082 rmmod nvme_keyring 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 95061 ']' 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 95061 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 95061 ']' 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 95061 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95061 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95061' 00:26:11.082 killing process with pid 95061 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 95061 00:26:11.082 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 95061 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:12.019 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:26:12.278 00:26:12.278 real 0m8.158s 00:26:12.278 user 0m12.679s 00:26:12.278 sys 0m1.668s 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.278 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:12.278 ************************************ 00:26:12.278 END TEST nvmf_nsid 00:26:12.278 ************************************ 00:26:12.536 09:57:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:12.536 00:26:12.536 real 14m52.594s 00:26:12.536 user 44m42.992s 00:26:12.536 sys 2m20.867s 00:26:12.536 09:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.536 ************************************ 00:26:12.536 09:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:12.536 END TEST nvmf_target_extra 00:26:12.536 ************************************ 00:26:12.536 09:57:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:12.536 09:57:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.536 09:57:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.536 09:57:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.536 ************************************ 00:26:12.536 START TEST nvmf_host 00:26:12.536 ************************************ 00:26:12.536 09:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:12.536 * Looking for test storage... 
00:26:12.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.536 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:12.537 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.798 --rc genhtml_branch_coverage=1 00:26:12.798 --rc genhtml_function_coverage=1 00:26:12.798 --rc genhtml_legend=1 00:26:12.798 --rc geninfo_all_blocks=1 00:26:12.798 --rc geninfo_unexecuted_blocks=1 00:26:12.798 00:26:12.798 ' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:12.798 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:12.798 --rc genhtml_branch_coverage=1 00:26:12.798 --rc genhtml_function_coverage=1 00:26:12.798 --rc genhtml_legend=1 00:26:12.798 --rc geninfo_all_blocks=1 00:26:12.798 --rc geninfo_unexecuted_blocks=1 00:26:12.798 00:26:12.798 ' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.798 --rc genhtml_branch_coverage=1 00:26:12.798 --rc genhtml_function_coverage=1 00:26:12.798 --rc genhtml_legend=1 00:26:12.798 --rc geninfo_all_blocks=1 00:26:12.798 --rc geninfo_unexecuted_blocks=1 00:26:12.798 00:26:12.798 ' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.798 --rc genhtml_branch_coverage=1 00:26:12.798 --rc genhtml_function_coverage=1 00:26:12.798 --rc genhtml_legend=1 00:26:12.798 --rc geninfo_all_blocks=1 00:26:12.798 --rc geninfo_unexecuted_blocks=1 00:26:12.798 00:26:12.798 ' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:12.798 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
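The scripts/common.sh run above (the 'lt 1.15 2' / cmp_versions block) is how the harness classifies the installed lcov before choosing coverage flags: both version strings are split on dots and dashes with IFS=.-, read into arrays, and compared component by component, with missing components treated as 0. The same idiom in isolation (a sketch under those assumptions, not the full cmp_versions implementation, which also supports operators other than '<'):

# True when version $1 sorts strictly before version $2, e.g. version_lt 1.15 2.
version_lt() {
    local -a v1 v2
    local i n
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # versions are equal
}

version_lt 1.15 2 && echo "lcov is older than 2.x"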
00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.799 ************************************ 00:26:12.799 START TEST nvmf_multicontroller 00:26:12.799 ************************************ 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:12.799 * Looking for test storage... 00:26:12.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.799 --rc genhtml_branch_coverage=1 00:26:12.799 --rc genhtml_function_coverage=1 00:26:12.799 --rc genhtml_legend=1 00:26:12.799 --rc geninfo_all_blocks=1 00:26:12.799 --rc geninfo_unexecuted_blocks=1 00:26:12.799 00:26:12.799 ' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.799 --rc genhtml_branch_coverage=1 00:26:12.799 --rc genhtml_function_coverage=1 00:26:12.799 --rc genhtml_legend=1 00:26:12.799 --rc geninfo_all_blocks=1 00:26:12.799 --rc geninfo_unexecuted_blocks=1 00:26:12.799 00:26:12.799 ' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.799 --rc genhtml_branch_coverage=1 00:26:12.799 --rc genhtml_function_coverage=1 00:26:12.799 --rc genhtml_legend=1 00:26:12.799 --rc geninfo_all_blocks=1 00:26:12.799 --rc geninfo_unexecuted_blocks=1 00:26:12.799 00:26:12.799 ' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:12.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.799 --rc genhtml_branch_coverage=1 00:26:12.799 --rc genhtml_function_coverage=1 00:26:12.799 --rc genhtml_legend=1 00:26:12.799 --rc geninfo_all_blocks=1 00:26:12.799 --rc geninfo_unexecuted_blocks=1 00:26:12.799 00:26:12.799 ' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:12.799 09:58:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.799 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:12.800 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.071 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:13.071 09:58:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.071 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:13.072 09:58:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:13.072 Cannot find device "nvmf_init_br" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:13.072 Cannot find device "nvmf_init_br2" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:13.072 Cannot find device "nvmf_tgt_br" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:13.072 Cannot find device "nvmf_tgt_br2" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:13.072 Cannot find device "nvmf_init_br" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:13.072 Cannot find device "nvmf_init_br2" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:13.072 Cannot find device "nvmf_tgt_br" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:13.072 Cannot find device "nvmf_tgt_br2" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:13.072 Cannot find device "nvmf_br" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:13.072 Cannot find device "nvmf_init_if" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:13.072 Cannot find device "nvmf_init_if2" 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:13.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:13.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.072 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:13.341 09:58:00 
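Lines @177 through @208 above build the test network: one namespace for the target, veth pairs whose initiator ends (10.0.0.1, 10.0.0.2) stay in the root namespace while the target ends (10.0.0.3, 10.0.0.4) move into nvmf_tgt_ns_spdk, and a bridge that joins the peer ends. A condensed sketch of a single initiator/target pair with the same names and addresses; the bridge-enslaving step appears just below in the trace (@211-@214):

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # Initiator end stays in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip link set nvmf_init_if up
    # Target end moves into the namespace.
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the *_br peer ends so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br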
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:13.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:26:13.341 00:26:13.341 --- 10.0.0.3 ping statistics --- 00:26:13.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.341 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:13.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:13.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:26:13.341 00:26:13.341 --- 10.0.0.4 ping statistics --- 00:26:13.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.341 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:13.341 00:26:13.341 --- 10.0.0.1 ping statistics --- 00:26:13.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.341 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:13.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:13.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:26:13.341 00:26:13.341 --- 10.0.0.2 ping statistics --- 00:26:13.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.341 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=95509 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 95509 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 95509 ']' 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.341 09:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:13.341 [2024-12-05 09:58:00.937511] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
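Two details worth noting in this stretch: every firewall rule is installed through the ipts wrapper (@790 above), which appends an "-m comment --comment SPDK_NVMF:..." tag so teardown can later strip exactly these rules, and NVMF_APP is prefixed with NVMF_TARGET_NS_CMD (@227) so the target binary runs inside the namespace via "ip netns exec". A sketch of the tagging pair, matching the commands visible in the trace here and in the cleanup further down:

    #!/usr/bin/env bash
    # Install a rule tagged with its own argument string (nvmf/common.sh@790 pattern).
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # Teardown later restores the ruleset minus every tagged rule.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }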
00:26:13.342 [2024-12-05 09:58:00.937629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.600 [2024-12-05 09:58:01.104105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:13.600 [2024-12-05 09:58:01.210046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.600 [2024-12-05 09:58:01.210101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.600 [2024-12-05 09:58:01.210119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.600 [2024-12-05 09:58:01.210130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.600 [2024-12-05 09:58:01.210171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.600 [2024-12-05 09:58:01.212120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.600 [2024-12-05 09:58:01.212238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.600 [2024-12-05 09:58:01.212261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.535 [2024-12-05 09:58:01.938196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.535 09:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.535 Malloc0 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 [2024-12-05 09:58:02.043858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 [2024-12-05 09:58:02.051738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 Malloc1 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
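The rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations against the target's default RPC socket (/var/tmp/spdk.sock); that mapping is how the SPDK test harness usually implements rpc_cmd, so treat the sketch below as an assumed equivalent rather than the literal trace. Provisioning cnode1 end to end:

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

cnode2 is built the same way with Malloc1, as the trace continues below.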
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=95561 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 95561 /var/tmp/bdevperf.sock 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 95561 ']' 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
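bdevperf is launched here with -z, which makes it start with no work and wait for RPC commands on the private socket given by -r, so the test can attach and detach NVMe controllers before any I/O runs; -q/-o/-w/-t fix the eventual workload (queue depth 128, 4 KiB writes, 1 second). A sketch of the launch-and-wait pattern; the socket poll is a crude stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock
    $bdevperf -z -r "$sock" -q 128 -o 4096 -w write -t 1 &
    bdevperf_pid=$!
    # Wait until the private RPC socket exists before sending commands.
    until [ -S "$sock" ]; do sleep 0.1; done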
00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.733 NVMe0n1 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.733 1 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.733 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 2024/12/05 09:58:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:15.993 request: 00:26:15.993 { 00:26:15.993 "method": "bdev_nvme_attach_controller", 00:26:15.993 "params": { 00:26:15.993 "name": "NVMe0", 00:26:15.993 "trtype": "tcp", 00:26:15.993 "traddr": "10.0.0.3", 00:26:15.993 "adrfam": "ipv4", 00:26:15.993 "trsvcid": "4420", 00:26:15.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.993 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:15.993 "hostaddr": "10.0.0.1", 00:26:15.993 "prchk_reftag": false, 00:26:15.993 "prchk_guard": false, 00:26:15.993 "hdgst": false, 00:26:15.993 "ddgst": false, 00:26:15.993 "allow_unrecognized_csi": false 00:26:15.993 } 00:26:15.993 } 00:26:15.993 Got JSON-RPC error response 00:26:15.993 GoRPCClient: error on JSON-RPC call 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.993 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 2024/12/05 09:58:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:15.993 request: 00:26:15.993 { 00:26:15.993 "method": "bdev_nvme_attach_controller", 00:26:15.993 "params": { 00:26:15.994 "name": "NVMe0", 00:26:15.994 "trtype": "tcp", 00:26:15.994 "traddr": "10.0.0.3", 00:26:15.994 "adrfam": "ipv4", 00:26:15.994 "trsvcid": "4420", 00:26:15.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:15.994 "hostaddr": "10.0.0.1", 00:26:15.994 "prchk_reftag": false, 00:26:15.994 "prchk_guard": false, 00:26:15.994 "hdgst": false, 00:26:15.994 "ddgst": false, 00:26:15.994 "allow_unrecognized_csi": false 00:26:15.994 } 00:26:15.994 } 00:26:15.994 Got JSON-RPC error response 00:26:15.994 GoRPCClient: error on JSON-RPC call 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 2024/12/05 09:58:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:26:15.994 request: 00:26:15.994 { 00:26:15.994 
"method": "bdev_nvme_attach_controller", 00:26:15.994 "params": { 00:26:15.994 "name": "NVMe0", 00:26:15.994 "trtype": "tcp", 00:26:15.994 "traddr": "10.0.0.3", 00:26:15.994 "adrfam": "ipv4", 00:26:15.994 "trsvcid": "4420", 00:26:15.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.994 "hostaddr": "10.0.0.1", 00:26:15.994 "prchk_reftag": false, 00:26:15.994 "prchk_guard": false, 00:26:15.994 "hdgst": false, 00:26:15.994 "ddgst": false, 00:26:15.994 "multipath": "disable", 00:26:15.994 "allow_unrecognized_csi": false 00:26:15.994 } 00:26:15.994 } 00:26:15.994 Got JSON-RPC error response 00:26:15.994 GoRPCClient: error on JSON-RPC call 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 2024/12/05 09:58:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:15.994 request: 00:26:15.994 { 00:26:15.994 "method": "bdev_nvme_attach_controller", 00:26:15.994 "params": { 00:26:15.994 "name": "NVMe0", 00:26:15.994 "trtype": "tcp", 00:26:15.994 "traddr": 
"10.0.0.3", 00:26:15.994 "adrfam": "ipv4", 00:26:15.994 "trsvcid": "4420", 00:26:15.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.994 "hostaddr": "10.0.0.1", 00:26:15.994 "prchk_reftag": false, 00:26:15.994 "prchk_guard": false, 00:26:15.994 "hdgst": false, 00:26:15.994 "ddgst": false, 00:26:15.994 "multipath": "failover", 00:26:15.994 "allow_unrecognized_csi": false 00:26:15.994 } 00:26:15.994 } 00:26:15.994 Got JSON-RPC error response 00:26:15.994 GoRPCClient: error on JSON-RPC call 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 NVMe0n1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.994 09:58:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:15.994 09:58:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:17.373 { 00:26:17.373 "results": [ 00:26:17.373 { 00:26:17.373 "job": "NVMe0n1", 00:26:17.373 "core_mask": "0x1", 00:26:17.373 "workload": "write", 00:26:17.373 "status": "finished", 00:26:17.373 "queue_depth": 128, 00:26:17.373 "io_size": 4096, 00:26:17.373 "runtime": 1.008946, 00:26:17.373 "iops": 18172.429446174523, 00:26:17.373 "mibps": 70.98605252411923, 00:26:17.373 "io_failed": 0, 00:26:17.373 "io_timeout": 0, 00:26:17.373 "avg_latency_us": 7023.583632694549, 00:26:17.373 "min_latency_us": 3902.370909090909, 00:26:17.373 "max_latency_us": 14477.498181818182 00:26:17.373 } 00:26:17.373 ], 00:26:17.373 "core_count": 1 00:26:17.373 } 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 nvme1n1 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
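Because bdevperf was idle under -z, I/O only starts when bdevperf.py sends perform_tests over the private socket; the JSON block above is its reply (about 18.2k IOPS, 71 MiB/s, for the 1-second 4 KiB write run at queue depth 128). A sketch of pulling the headline numbers out of that reply; the jq filter is illustrative, keyed to the fields shown above:

    #!/usr/bin/env bash
    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $py -s /var/tmp/bdevperf.sock perform_tests \
        | jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"'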
00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 nvme1n1 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.373 09:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 95561 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 95561 ']' 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 95561 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95561 00:26:17.644 killing process with pid 95561 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95561' 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 95561 00:26:17.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 95561 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
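The last two attach/detach rounds exercise the -i option, which pins the initiator-side source address; the test verifies it on the target by listing cnode2's queue pairs and reading peer_address.traddr (10.0.0.1 on the first attach, 10.0.0.2 after the re-attach). A sketch of that check, assuming rpc.py against the target's default socket:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    want=10.0.0.2
    got=$($rpc nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
            | jq -r '.[].peer_address.traddr')
    [[ "$got" == "$want" ]] || { echo "unexpected source address: $got" >&2; exit 1; }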
SIGINT SIGTERM EXIT 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:18.581 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.581 [2024-12-05 09:58:02.286111] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:18.581 [2024-12-05 09:58:02.286343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95561 ] 00:26:18.581 [2024-12-05 09:58:02.468789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.581 [2024-12-05 09:58:02.584989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.581 [2024-12-05 09:58:03.573718] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name a7840d57-697a-4c33-8fb9-61d272fb8229 already exists 00:26:18.581 [2024-12-05 09:58:03.573805] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:a7840d57-697a-4c33-8fb9-61d272fb8229 alias for bdev NVMe1n1 00:26:18.581 [2024-12-05 09:58:03.573839] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:18.581 Running I/O for 1 seconds... 
00:26:18.581 18142.00 IOPS, 70.87 MiB/s 00:26:18.581 Latency(us) 00:26:18.581 [2024-12-05T09:58:06.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.581 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:18.581 NVMe0n1 : 1.01 18172.43 70.99 0.00 0.00 7023.58 3902.37 14477.50 00:26:18.581 [2024-12-05T09:58:06.210Z] =================================================================================================================== 00:26:18.581 [2024-12-05T09:58:06.210Z] Total : 18172.43 70.99 0.00 0.00 7023.58 3902.37 14477.50 00:26:18.581 Received shutdown signal, test time was about 1.000000 seconds 00:26:18.581 00:26:18.581 Latency(us) 00:26:18.581 [2024-12-05T09:58:06.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.581 [2024-12-05T09:58:06.210Z] =================================================================================================================== 00:26:18.581 [2024-12-05T09:58:06.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.581 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.581 09:58:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.581 rmmod nvme_tcp 00:26:18.581 rmmod nvme_fabrics 00:26:18.581 rmmod nvme_keyring 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 95509 ']' 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 95509 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 95509 ']' 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 95509 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95509 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:18.581 killing process with pid 95509 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95509' 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 95509 00:26:18.581 09:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 95509 00:26:19.955 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
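nvmftestfini unwinds the setup in reverse: unload the nvme-tcp/nvme-fabrics modules, kill the target, strip the tagged iptables rules via iptr, then delete the bridge, the veth pairs, and finally the namespace. A simplified teardown sketch (deleting the namespace also removes the interfaces still inside it, so the explicit netns-side deletes in the trace are belt-and-braces):

    #!/usr/bin/env bash
    ip link set nvmf_init_br nomaster || true   # detach before deleting the bridge
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                 # removes its peer end as well
    ip netns delete nvmf_tgt_ns_spdk            # takes nvmf_tgt_if* with it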
00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:26:19.956 00:26:19.956 real 0m7.313s 00:26:19.956 user 0m21.977s 00:26:19.956 sys 0m1.552s 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.956 ************************************ 00:26:19.956 END TEST nvmf_multicontroller 00:26:19.956 ************************************ 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.956 ************************************ 00:26:19.956 START TEST nvmf_aer 00:26:19.956 ************************************ 00:26:19.956 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:20.214 * Looking for test storage... 00:26:20.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:20.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.214 --rc genhtml_branch_coverage=1 00:26:20.214 --rc genhtml_function_coverage=1 00:26:20.214 --rc genhtml_legend=1 00:26:20.214 --rc geninfo_all_blocks=1 00:26:20.214 --rc geninfo_unexecuted_blocks=1 00:26:20.214 00:26:20.214 ' 00:26:20.214 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:20.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.215 --rc genhtml_branch_coverage=1 00:26:20.215 --rc genhtml_function_coverage=1 00:26:20.215 --rc genhtml_legend=1 00:26:20.215 --rc geninfo_all_blocks=1 00:26:20.215 --rc geninfo_unexecuted_blocks=1 00:26:20.215 00:26:20.215 ' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:20.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.215 --rc genhtml_branch_coverage=1 00:26:20.215 --rc genhtml_function_coverage=1 00:26:20.215 --rc genhtml_legend=1 00:26:20.215 --rc geninfo_all_blocks=1 00:26:20.215 --rc geninfo_unexecuted_blocks=1 00:26:20.215 00:26:20.215 ' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:20.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.215 --rc genhtml_branch_coverage=1 00:26:20.215 --rc genhtml_function_coverage=1 00:26:20.215 --rc genhtml_legend=1 00:26:20.215 --rc geninfo_all_blocks=1 00:26:20.215 --rc geninfo_unexecuted_blocks=1 00:26:20.215 00:26:20.215 ' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.215 
09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.215 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:20.215 Cannot find device "nvmf_init_br" 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:20.215 Cannot find device "nvmf_init_br2" 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:20.215 Cannot find device "nvmf_tgt_br" 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.215 Cannot find device "nvmf_tgt_br2" 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:26:20.215 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:20.474 Cannot find device "nvmf_init_br" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:20.474 Cannot find device "nvmf_init_br2" 00:26:20.474 09:58:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:20.474 Cannot find device "nvmf_tgt_br" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:20.474 Cannot find device "nvmf_tgt_br2" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:20.474 Cannot find device "nvmf_br" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:20.474 Cannot find device "nvmf_init_if" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:20.474 Cannot find device "nvmf_init_if2" 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.474 09:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:20.474 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:20.732 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:20.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:20.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:26:20.733 00:26:20.733 --- 10.0.0.3 ping statistics --- 00:26:20.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.733 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:20.733 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:20.733 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:26:20.733 00:26:20.733 --- 10.0.0.4 ping statistics --- 00:26:20.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.733 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:20.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:20.733 00:26:20.733 --- 10.0.0.1 ping statistics --- 00:26:20.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.733 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:20.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:26:20.733 00:26:20.733 --- 10.0.0.2 ping statistics --- 00:26:20.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.733 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=95888 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 95888 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 95888 ']' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.733 09:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.991 [2024-12-05 09:58:08.360128] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
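At this point in the trace nvmf_veth_init has finished building the test network: a veth pair per initiator and target interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, every host-side peer enslaved to the nvmf_br bridge, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in both directions. Condensed into a standalone sketch for reference — this is a hand-written summary, not harness code: it keeps one initiator and one target interface out of the four above, takes every name and address from the trace, omits error handling, and must run as root on Linux with iproute2:

  # Rebuild the traced topology in miniature (run as root).
  ip netns add nvmf_tgt_ns_spdk                              # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joins the host-side ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT        # bridge forwarding rule from the trace
  ping -c 1 10.0.0.3                                         # same sanity check as the trace

After the pings succeed, the trace loads the nvme-tcp kernel module and launches nvmf_tgt inside the namespace; the DPDK EAL initialization of that process continues below.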
00:26:20.991 [2024-12-05 09:58:08.360302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.991 [2024-12-05 09:58:08.556972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.249 [2024-12-05 09:58:08.718808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.249 [2024-12-05 09:58:08.718888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.249 [2024-12-05 09:58:08.718913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.249 [2024-12-05 09:58:08.718946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.249 [2024-12-05 09:58:08.718981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.249 [2024-12-05 09:58:08.721597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.249 [2024-12-05 09:58:08.721793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.249 [2024-12-05 09:58:08.721955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.249 [2024-12-05 09:58:08.722582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 [2024-12-05 09:58:09.318550] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 Malloc0 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 [2024-12-05 09:58:09.428426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.815 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.815 [ 00:26:21.815 { 00:26:21.815 "allow_any_host": true, 00:26:21.815 "hosts": [], 00:26:21.815 "listen_addresses": [], 00:26:21.815 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:21.815 "subtype": "Discovery" 00:26:21.815 }, 00:26:21.815 { 00:26:21.815 "allow_any_host": true, 00:26:21.815 "hosts": [], 00:26:21.815 "listen_addresses": [ 00:26:21.815 { 00:26:21.815 "adrfam": "IPv4", 00:26:22.074 "traddr": "10.0.0.3", 00:26:22.074 "trsvcid": "4420", 00:26:22.074 "trtype": "TCP" 00:26:22.074 } 00:26:22.074 ], 00:26:22.074 "max_cntlid": 65519, 00:26:22.074 "max_namespaces": 2, 00:26:22.074 "min_cntlid": 1, 00:26:22.074 "model_number": "SPDK bdev Controller", 00:26:22.074 "namespaces": [ 00:26:22.074 { 00:26:22.074 "bdev_name": "Malloc0", 00:26:22.074 "name": "Malloc0", 00:26:22.074 "nguid": "DA996FC03D60402A88D90991939CAEC2", 00:26:22.074 "nsid": 1, 00:26:22.074 "uuid": "da996fc0-3d60-402a-88d9-0991939caec2" 00:26:22.074 } 00:26:22.074 ], 00:26:22.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.074 "serial_number": "SPDK00000000000001", 00:26:22.074 "subtype": "NVMe" 00:26:22.074 } 00:26:22.074 ] 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=95942 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:22.074 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.333 Malloc1 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.333 [ 00:26:22.333 { 00:26:22.333 "allow_any_host": true, 00:26:22.333 "hosts": [], 00:26:22.333 "listen_addresses": [], 00:26:22.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:22.333 "subtype": "Discovery" 00:26:22.333 }, 00:26:22.333 { 00:26:22.333 "allow_any_host": true, 00:26:22.333 "hosts": [], 00:26:22.333 "listen_addresses": [ 00:26:22.333 { 00:26:22.333 "adrfam": "IPv4", 00:26:22.333 "traddr": "10.0.0.3", 00:26:22.333 "trsvcid": "4420", 00:26:22.333 "trtype": "TCP" 00:26:22.333 } 00:26:22.333 ], 00:26:22.333 "max_cntlid": 65519, 00:26:22.333 "max_namespaces": 2, 00:26:22.333 "min_cntlid": 1, 00:26:22.333 "model_number": "SPDK bdev Controller", 00:26:22.333 "namespaces": [ 00:26:22.333 { 00:26:22.333 "bdev_name": "Malloc0", 00:26:22.333 "name": "Malloc0", 00:26:22.333 "nguid": "DA996FC03D60402A88D90991939CAEC2", 
00:26:22.333 "nsid": 1, 00:26:22.333 "uuid": "da996fc0-3d60-402a-88d9-0991939caec2" 00:26:22.333 }, 00:26:22.333 { 00:26:22.333 "bdev_name": "Malloc1", 00:26:22.333 "name": "Malloc1", 00:26:22.333 "nguid": "41E7B3AD32204BBE9C8E62AEADE1CB6F", 00:26:22.333 "nsid": 2, 00:26:22.333 "uuid": "41e7b3ad-3220-4bbe-9c8e-62aeade1cb6f" 00:26:22.333 } 00:26:22.333 ], 00:26:22.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.333 "serial_number": "SPDK00000000000001", 00:26:22.333 "subtype": "NVMe" 00:26:22.333 } 00:26:22.333 ] 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.333 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 95942 00:26:22.592 Asynchronous Event Request test 00:26:22.592 Attaching to 10.0.0.3 00:26:22.592 Attached to 10.0.0.3 00:26:22.592 Registering asynchronous event callbacks... 00:26:22.592 Starting namespace attribute notice tests for all controllers... 00:26:22.592 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:22.592 aer_cb - Changed Namespace 00:26:22.592 Cleaning up... 00:26:22.592 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:22.592 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.592 09:58:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.592 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.592 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:22.592 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.592 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.850 rmmod nvme_tcp 00:26:22.850 rmmod nvme_fabrics 00:26:22.850 rmmod nvme_keyring 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 95888 ']' 
00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 95888 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 95888 ']' 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 95888 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95888 00:26:22.850 killing process with pid 95888 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.850 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.851 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95888' 00:26:22.851 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 95888 00:26:22.851 09:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 95888 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.787 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:26:24.046 00:26:24.046 real 0m4.023s 00:26:24.046 user 0m10.378s 00:26:24.046 sys 0m1.034s 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:24.046 ************************************ 00:26:24.046 END TEST nvmf_aer 00:26:24.046 ************************************ 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.046 ************************************ 00:26:24.046 START TEST nvmf_async_init 00:26:24.046 ************************************ 00:26:24.046 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:24.306 * Looking for test storage... 00:26:24.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@345 -- # : 1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.306 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:24.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.306 --rc genhtml_branch_coverage=1 00:26:24.307 --rc genhtml_function_coverage=1 00:26:24.307 --rc genhtml_legend=1 00:26:24.307 --rc geninfo_all_blocks=1 00:26:24.307 --rc geninfo_unexecuted_blocks=1 00:26:24.307 00:26:24.307 ' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.307 --rc genhtml_branch_coverage=1 00:26:24.307 --rc genhtml_function_coverage=1 00:26:24.307 --rc genhtml_legend=1 00:26:24.307 --rc geninfo_all_blocks=1 00:26:24.307 --rc geninfo_unexecuted_blocks=1 00:26:24.307 00:26:24.307 ' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.307 --rc genhtml_branch_coverage=1 00:26:24.307 --rc genhtml_function_coverage=1 00:26:24.307 --rc genhtml_legend=1 00:26:24.307 --rc geninfo_all_blocks=1 00:26:24.307 --rc geninfo_unexecuted_blocks=1 00:26:24.307 00:26:24.307 ' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.307 --rc genhtml_branch_coverage=1 00:26:24.307 --rc genhtml_function_coverage=1 00:26:24.307 --rc genhtml_legend=1 00:26:24.307 --rc geninfo_all_blocks=1 00:26:24.307 --rc geninfo_unexecuted_blocks=1 00:26:24.307 00:26:24.307 ' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:24.307 09:58:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dd2eac7c090e4c02b741a363d4d4f524 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:24.307 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:24.308 Cannot find device "nvmf_init_br" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:24.308 Cannot find device "nvmf_init_br2" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:24.308 Cannot find device "nvmf_tgt_br" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.308 Cannot find device "nvmf_tgt_br2" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:24.308 Cannot find device "nvmf_init_br" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:24.308 Cannot find device "nvmf_init_br2" 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:26:24.308 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:24.567 Cannot find device "nvmf_tgt_br" 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:24.567 Cannot find device "nvmf_tgt_br2" 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:24.567 Cannot find device "nvmf_br" 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:24.567 Cannot find device "nvmf_init_if" 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:24.567 Cannot find device "nvmf_init_if2" 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:26:24.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.567 09:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:24.567 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.827 09:58:12 
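[editor note] For readability, the namespace and veth plumbing assembled above reduces to the shape below, a condensed restatement of the logged commands (one interface pair shown; the *_if2/*_br2 pair is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge joins the free peer ends
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up

The initiator addresses 10.0.0.1/10.0.0.2 can then reach the namespaced target at 10.0.0.3/10.0.0.4 across nvmf_br, which is exactly what the four pings below verify.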
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:24.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:26:24.827 00:26:24.827 --- 10.0.0.3 ping statistics --- 00:26:24.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.827 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:24.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:24.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:26:24.827 00:26:24.827 --- 10.0.0.4 ping statistics --- 00:26:24.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.827 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:24.827 00:26:24.827 --- 10.0.0.1 ping statistics --- 00:26:24.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.827 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:24.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:24.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:26:24.827 00:26:24.827 --- 10.0.0.2 ping statistics --- 00:26:24.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.827 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=96179 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 96179 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 96179 ']' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.827 09:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.827 [2024-12-05 09:58:12.432200] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:24.827 [2024-12-05 09:58:12.432373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.085 [2024-12-05 09:58:12.625021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.344 [2024-12-05 09:58:12.763239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
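[editor note] The nvmfappstart step above amounts to launching the target inside the namespace and polling its RPC socket until it answers; a minimal sketch (binary path, arguments, and socket taken from the log, the poll loop itself is an assumption about what waitforlisten does):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # block until the app answers on the default RPC socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Note also the ipts/iptr pair: every rule above is inserted with -m comment --comment 'SPDK_NVMF:...', so teardown can later strip only the test's rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of this test.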
00:26:25.344 [2024-12-05 09:58:12.763295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.344 [2024-12-05 09:58:12.763312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.344 [2024-12-05 09:58:12.763342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.344 [2024-12-05 09:58:12.763355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.344 [2024-12-05 09:58:12.764411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 [2024-12-05 09:58:13.496999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 null0 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.910 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dd2eac7c090e4c02b741a363d4d4f524 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.911 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.911 [2024-12-05 09:58:13.537151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:26.169 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.170 nvme0n1 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.170 [ 00:26:26.170 { 00:26:26.170 "aliases": [ 00:26:26.170 "dd2eac7c-090e-4c02-b741-a363d4d4f524" 00:26:26.170 ], 00:26:26.170 "assigned_rate_limits": { 00:26:26.170 "r_mbytes_per_sec": 0, 00:26:26.170 "rw_ios_per_sec": 0, 00:26:26.170 "rw_mbytes_per_sec": 0, 00:26:26.170 "w_mbytes_per_sec": 0 00:26:26.170 }, 00:26:26.170 "block_size": 512, 00:26:26.170 "claimed": false, 00:26:26.170 "driver_specific": { 00:26:26.170 "mp_policy": "active_passive", 00:26:26.170 "nvme": [ 00:26:26.170 { 00:26:26.170 "ctrlr_data": { 00:26:26.170 "ana_reporting": false, 00:26:26.170 "cntlid": 1, 00:26:26.170 "firmware_revision": "25.01", 00:26:26.170 "model_number": "SPDK bdev Controller", 00:26:26.170 "multi_ctrlr": true, 00:26:26.170 "oacs": { 00:26:26.170 "firmware": 0, 00:26:26.170 "format": 0, 00:26:26.170 "ns_manage": 0, 00:26:26.170 "security": 0 00:26:26.170 }, 00:26:26.170 "serial_number": "00000000000000000000", 00:26:26.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.170 "vendor_id": "0x8086" 00:26:26.170 }, 00:26:26.170 "ns_data": { 00:26:26.170 "can_share": true, 00:26:26.170 "id": 1 00:26:26.170 }, 00:26:26.170 "trid": { 00:26:26.170 "adrfam": "IPv4", 00:26:26.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.170 "traddr": "10.0.0.3", 00:26:26.170 "trsvcid": "4420", 00:26:26.170 "trtype": "TCP" 00:26:26.170 }, 00:26:26.170 "vs": { 00:26:26.170 "nvme_version": "1.3" 00:26:26.170 } 00:26:26.170 } 00:26:26.170 ] 00:26:26.170 }, 00:26:26.170 "memory_domains": [ 00:26:26.170 { 00:26:26.170 "dma_device_id": "system", 00:26:26.170 "dma_device_type": 1 00:26:26.170 } 00:26:26.170 ], 00:26:26.170 "name": "nvme0n1", 00:26:26.170 "num_blocks": 2097152, 00:26:26.170 "numa_id": -1, 00:26:26.170 "product_name": "NVMe disk", 00:26:26.170 "supported_io_types": { 00:26:26.170 "abort": true, 
00:26:26.170 "compare": true, 00:26:26.170 "compare_and_write": true, 00:26:26.170 "copy": true, 00:26:26.170 "flush": true, 00:26:26.170 "get_zone_info": false, 00:26:26.170 "nvme_admin": true, 00:26:26.170 "nvme_io": true, 00:26:26.170 "nvme_io_md": false, 00:26:26.170 "nvme_iov_md": false, 00:26:26.170 "read": true, 00:26:26.170 "reset": true, 00:26:26.170 "seek_data": false, 00:26:26.170 "seek_hole": false, 00:26:26.170 "unmap": false, 00:26:26.170 "write": true, 00:26:26.170 "write_zeroes": true, 00:26:26.170 "zcopy": false, 00:26:26.170 "zone_append": false, 00:26:26.170 "zone_management": false 00:26:26.170 }, 00:26:26.170 "uuid": "dd2eac7c-090e-4c02-b741-a363d4d4f524", 00:26:26.170 "zoned": false 00:26:26.170 } 00:26:26.170 ] 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.170 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.428 [2024-12-05 09:58:13.801621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.428 [2024-12-05 09:58:13.801728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:26.429 [2024-12-05 09:58:13.934220] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 [ 00:26:26.429 { 00:26:26.429 "aliases": [ 00:26:26.429 "dd2eac7c-090e-4c02-b741-a363d4d4f524" 00:26:26.429 ], 00:26:26.429 "assigned_rate_limits": { 00:26:26.429 "r_mbytes_per_sec": 0, 00:26:26.429 "rw_ios_per_sec": 0, 00:26:26.429 "rw_mbytes_per_sec": 0, 00:26:26.429 "w_mbytes_per_sec": 0 00:26:26.429 }, 00:26:26.429 "block_size": 512, 00:26:26.429 "claimed": false, 00:26:26.429 "driver_specific": { 00:26:26.429 "mp_policy": "active_passive", 00:26:26.429 "nvme": [ 00:26:26.429 { 00:26:26.429 "ctrlr_data": { 00:26:26.429 "ana_reporting": false, 00:26:26.429 "cntlid": 2, 00:26:26.429 "firmware_revision": "25.01", 00:26:26.429 "model_number": "SPDK bdev Controller", 00:26:26.429 "multi_ctrlr": true, 00:26:26.429 "oacs": { 00:26:26.429 "firmware": 0, 00:26:26.429 "format": 0, 00:26:26.429 "ns_manage": 0, 00:26:26.429 "security": 0 00:26:26.429 }, 00:26:26.429 "serial_number": "00000000000000000000", 00:26:26.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.429 "vendor_id": "0x8086" 00:26:26.429 }, 00:26:26.429 "ns_data": { 00:26:26.429 "can_share": true, 00:26:26.429 "id": 1 00:26:26.429 }, 00:26:26.429 "trid": { 00:26:26.429 "adrfam": "IPv4", 00:26:26.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.429 "traddr": "10.0.0.3", 00:26:26.429 "trsvcid": "4420", 00:26:26.429 "trtype": "TCP" 00:26:26.429 }, 00:26:26.429 "vs": { 00:26:26.429 "nvme_version": "1.3" 00:26:26.429 } 00:26:26.429 } 00:26:26.429 ] 
00:26:26.429 }, 00:26:26.429 "memory_domains": [ 00:26:26.429 { 00:26:26.429 "dma_device_id": "system", 00:26:26.429 "dma_device_type": 1 00:26:26.429 } 00:26:26.429 ], 00:26:26.429 "name": "nvme0n1", 00:26:26.429 "num_blocks": 2097152, 00:26:26.429 "numa_id": -1, 00:26:26.429 "product_name": "NVMe disk", 00:26:26.429 "supported_io_types": { 00:26:26.429 "abort": true, 00:26:26.429 "compare": true, 00:26:26.429 "compare_and_write": true, 00:26:26.429 "copy": true, 00:26:26.429 "flush": true, 00:26:26.429 "get_zone_info": false, 00:26:26.429 "nvme_admin": true, 00:26:26.429 "nvme_io": true, 00:26:26.429 "nvme_io_md": false, 00:26:26.429 "nvme_iov_md": false, 00:26:26.429 "read": true, 00:26:26.429 "reset": true, 00:26:26.429 "seek_data": false, 00:26:26.429 "seek_hole": false, 00:26:26.429 "unmap": false, 00:26:26.429 "write": true, 00:26:26.429 "write_zeroes": true, 00:26:26.429 "zcopy": false, 00:26:26.429 "zone_append": false, 00:26:26.429 "zone_management": false 00:26:26.429 }, 00:26:26.429 "uuid": "dd2eac7c-090e-4c02-b741-a363d4d4f524", 00:26:26.429 "zoned": false 00:26:26.429 } 00:26:26.429 ] 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FLa8HxTul8 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FLa8HxTul8 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.FLa8HxTul8 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 [2024-12-05 09:58:14.009903] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:26.429 [2024-12-05 09:58:14.010109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.429 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.429 [2024-12-05 09:58:14.025864] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:26.687 nvme0n1 00:26:26.687 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.687 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:26.687 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.687 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.687 [ 00:26:26.687 { 00:26:26.687 "aliases": [ 00:26:26.687 "dd2eac7c-090e-4c02-b741-a363d4d4f524" 00:26:26.688 ], 00:26:26.688 "assigned_rate_limits": { 00:26:26.688 "r_mbytes_per_sec": 0, 00:26:26.688 "rw_ios_per_sec": 0, 00:26:26.688 "rw_mbytes_per_sec": 0, 00:26:26.688 "w_mbytes_per_sec": 0 00:26:26.688 }, 00:26:26.688 "block_size": 512, 00:26:26.688 "claimed": false, 00:26:26.688 "driver_specific": { 00:26:26.688 "mp_policy": "active_passive", 00:26:26.688 "nvme": [ 00:26:26.688 { 00:26:26.688 "ctrlr_data": { 00:26:26.688 "ana_reporting": false, 00:26:26.688 "cntlid": 3, 00:26:26.688 "firmware_revision": "25.01", 00:26:26.688 "model_number": "SPDK bdev Controller", 00:26:26.688 "multi_ctrlr": true, 00:26:26.688 "oacs": { 00:26:26.688 "firmware": 0, 00:26:26.688 "format": 0, 00:26:26.688 "ns_manage": 0, 00:26:26.688 "security": 0 00:26:26.688 }, 00:26:26.688 "serial_number": "00000000000000000000", 00:26:26.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.688 "vendor_id": "0x8086" 00:26:26.688 }, 00:26:26.688 "ns_data": { 00:26:26.688 "can_share": true, 00:26:26.688 "id": 1 00:26:26.688 }, 00:26:26.688 "trid": { 00:26:26.688 "adrfam": "IPv4", 00:26:26.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.688 "traddr": "10.0.0.3", 00:26:26.688 "trsvcid": "4421", 00:26:26.688 "trtype": "TCP" 00:26:26.688 }, 00:26:26.688 "vs": { 00:26:26.688 "nvme_version": "1.3" 00:26:26.688 } 00:26:26.688 } 00:26:26.688 ] 00:26:26.688 }, 00:26:26.688 "memory_domains": [ 00:26:26.688 { 00:26:26.688 "dma_device_id": "system", 00:26:26.688 "dma_device_type": 1 00:26:26.688 } 00:26:26.688 ], 00:26:26.688 "name": "nvme0n1", 00:26:26.688 "num_blocks": 
2097152, 00:26:26.688 "numa_id": -1, 00:26:26.688 "product_name": "NVMe disk", 00:26:26.688 "supported_io_types": { 00:26:26.688 "abort": true, 00:26:26.688 "compare": true, 00:26:26.688 "compare_and_write": true, 00:26:26.688 "copy": true, 00:26:26.688 "flush": true, 00:26:26.688 "get_zone_info": false, 00:26:26.688 "nvme_admin": true, 00:26:26.688 "nvme_io": true, 00:26:26.688 "nvme_io_md": false, 00:26:26.688 "nvme_iov_md": false, 00:26:26.688 "read": true, 00:26:26.688 "reset": true, 00:26:26.688 "seek_data": false, 00:26:26.688 "seek_hole": false, 00:26:26.688 "unmap": false, 00:26:26.688 "write": true, 00:26:26.688 "write_zeroes": true, 00:26:26.688 "zcopy": false, 00:26:26.688 "zone_append": false, 00:26:26.688 "zone_management": false 00:26:26.688 }, 00:26:26.688 "uuid": "dd2eac7c-090e-4c02-b741-a363d4d4f524", 00:26:26.688 "zoned": false 00:26:26.688 } 00:26:26.688 ] 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.FLa8HxTul8 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.688 rmmod nvme_tcp 00:26:26.688 rmmod nvme_fabrics 00:26:26.688 rmmod nvme_keyring 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 96179 ']' 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 96179 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 96179 ']' 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 96179 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96179 00:26:26.688 09:58:14 
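[editor note] With teardown underway, the whole async_init exercise condenses to the RPC sequence below (arguments verbatim from the log; only the rpc.py invocation path is assumed):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dd2eac7c090e4c02b741a363d4d4f524
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_nvme_reset_controller nvme0        # forces a reconnect
  rpc.py bdev_nvme_detach_controller nvme0
  # TLS leg: register a PSK, disable allow-any-host, listen with --secure-channel on 4421
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.FLa8HxTul8
  chmod 0600 /tmp/tmp.FLa8HxTul8
  rpc.py keyring_file_add_key key0 /tmp/tmp.FLa8HxTul8
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The three bdev_get_bdevs dumps confirm each stage: cntlid increments 1 -> 2 -> 3 across the reset and the TLS reattach, and trsvcid moves from 4420 to 4421. That field can be pulled straight out of the JSON, e.g. rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'.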
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.688 killing process with pid 96179 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96179' 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 96179 00:26:26.688 09:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 96179 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:27.619 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:27.620 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.620 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:27.620 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:26:27.877 00:26:27.877 real 0m3.773s 00:26:27.877 user 0m3.280s 00:26:27.877 sys 0m0.896s 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:27.877 ************************************ 00:26:27.877 END TEST nvmf_async_init 00:26:27.877 ************************************ 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.877 ************************************ 00:26:27.877 START TEST dma 00:26:27.877 ************************************ 00:26:27.877 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:28.136 * Looking for test storage... 00:26:28.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.136 --rc genhtml_branch_coverage=1 00:26:28.136 --rc genhtml_function_coverage=1 00:26:28.136 --rc genhtml_legend=1 00:26:28.136 --rc geninfo_all_blocks=1 00:26:28.136 --rc geninfo_unexecuted_blocks=1 00:26:28.136 00:26:28.136 ' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.136 --rc genhtml_branch_coverage=1 00:26:28.136 --rc genhtml_function_coverage=1 00:26:28.136 --rc genhtml_legend=1 00:26:28.136 --rc geninfo_all_blocks=1 00:26:28.136 --rc geninfo_unexecuted_blocks=1 00:26:28.136 00:26:28.136 ' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.136 --rc genhtml_branch_coverage=1 00:26:28.136 --rc genhtml_function_coverage=1 00:26:28.136 --rc genhtml_legend=1 00:26:28.136 --rc geninfo_all_blocks=1 00:26:28.136 --rc geninfo_unexecuted_blocks=1 00:26:28.136 00:26:28.136 ' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.136 --rc genhtml_branch_coverage=1 00:26:28.136 --rc genhtml_function_coverage=1 00:26:28.136 --rc genhtml_legend=1 00:26:28.136 --rc geninfo_all_blocks=1 00:26:28.136 --rc geninfo_unexecuted_blocks=1 00:26:28.136 00:26:28.136 ' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.136 09:58:15 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:28.136 ************************************ 00:26:28.136 END TEST dma 00:26:28.136 ************************************ 00:26:28.136 00:26:28.136 real 0m0.213s 00:26:28.136 user 0m0.127s 00:26:28.136 sys 0m0.101s 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.136 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.137 09:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.137 ************************************ 00:26:28.137 START TEST nvmf_identify 00:26:28.137 ************************************ 00:26:28.137 09:58:15 
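[editor note] Two things worth noting in the dma run above: host/dma.sh bails out immediately ('[' tcp '!=' rdma ']' then exit 0), which is why the whole suite costs 0m0.213s on a tcp transport; and the repeated "[: : integer expression expected" complaint comes from common.sh line 33 performing a numeric test on an empty string. The message is cosmetic, the test simply evaluates false, but the usual defensive form avoids it (the variable name below is a hypothetical stand-in, not the one common.sh actually tests):

  [ '' -eq 1 ]                     # reproduces the error: [: : integer expression expected
  [ "${SPDK_SOME_FLAG:-0}" -eq 1 ] # defaulting the empty value keeps the test quiet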
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:28.395 * Looking for test storage... 00:26:28.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.395 --rc genhtml_branch_coverage=1 00:26:28.395 --rc genhtml_function_coverage=1 00:26:28.395 --rc genhtml_legend=1 00:26:28.395 --rc geninfo_all_blocks=1 00:26:28.395 --rc geninfo_unexecuted_blocks=1 00:26:28.395 00:26:28.395 ' 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.395 --rc genhtml_branch_coverage=1 00:26:28.395 --rc genhtml_function_coverage=1 00:26:28.395 --rc genhtml_legend=1 00:26:28.395 --rc geninfo_all_blocks=1 00:26:28.395 --rc geninfo_unexecuted_blocks=1 00:26:28.395 00:26:28.395 ' 00:26:28.395 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.396 --rc genhtml_branch_coverage=1 00:26:28.396 --rc genhtml_function_coverage=1 00:26:28.396 --rc genhtml_legend=1 00:26:28.396 --rc geninfo_all_blocks=1 00:26:28.396 --rc geninfo_unexecuted_blocks=1 00:26:28.396 00:26:28.396 ' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.396 --rc genhtml_branch_coverage=1 00:26:28.396 --rc genhtml_function_coverage=1 00:26:28.396 --rc genhtml_legend=1 00:26:28.396 --rc geninfo_all_blocks=1 00:26:28.396 --rc geninfo_unexecuted_blocks=1 00:26:28.396 00:26:28.396 ' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.396 
09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.396 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.396 09:58:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:28.396 Cannot find device "nvmf_init_br" 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:28.396 Cannot find device "nvmf_init_br2" 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:28.396 Cannot find device "nvmf_tgt_br" 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:26:28.396 Cannot find device "nvmf_tgt_br2" 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:26:28.396 09:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:28.396 Cannot find device "nvmf_init_br" 00:26:28.396 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:26:28.396 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:28.396 Cannot find device "nvmf_init_br2" 00:26:28.396 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:26:28.396 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:28.655 Cannot find device "nvmf_tgt_br" 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:28.655 Cannot find device "nvmf_tgt_br2" 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:28.655 Cannot find device "nvmf_br" 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:28.655 Cannot find device "nvmf_init_if" 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:28.655 Cannot find device "nvmf_init_if2" 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:28.655 
09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.655 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:28.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:28.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:26:28.914 00:26:28.914 --- 10.0.0.3 ping statistics --- 00:26:28.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.914 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:28.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:28.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:26:28.914 00:26:28.914 --- 10.0.0.4 ping statistics --- 00:26:28.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.914 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:28.914 00:26:28.914 --- 10.0.0.1 ping statistics --- 00:26:28.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.914 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:28.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:28.914 00:26:28.914 --- 10.0.0.2 ping statistics --- 00:26:28.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.914 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=96524 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 96524 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 96524 ']' 00:26:28.914 
09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.914 09:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:29.173 [2024-12-05 09:58:16.558340] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:29.173 [2024-12-05 09:58:16.558531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.173 [2024-12-05 09:58:16.750872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.433 [2024-12-05 09:58:16.856681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.433 [2024-12-05 09:58:16.856745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.433 [2024-12-05 09:58:16.856781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.433 [2024-12-05 09:58:16.856792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.433 [2024-12-05 09:58:16.856816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
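For reference, the nvmf_veth_init sequence traced above reduces to a small amount of iproute2 plumbing: initiator-side veth ends stay on the host (10.0.0.1, 10.0.0.2), target-side ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), everything is joined by the nvmf_br bridge, and the target is then launched inside the namespace. The condensed sketch below shows one of the two veth pairs; names, addresses, and flags are copied verbatim from the trace, while the second pair (nvmf_init_if2/nvmf_tgt_if2) is set up identically and is omitted here, as are the stale-device cleanup steps and error handling:

  # one initiator/target veth pair, bridged into the target's network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                             # bridge ties both sides together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic through
  ping -c 1 10.0.0.3    # host-side initiator can reach the namespaced target address
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The four ping checks in the trace verify exactly this reachability matrix (host to 10.0.0.3/10.0.0.4 and namespace back to 10.0.0.1/10.0.0.2) before the target is started.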
00:26:29.433 [2024-12-05 09:58:16.858848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.433 [2024-12-05 09:58:16.859031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.433 [2024-12-05 09:58:16.859136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.433 [2024-12-05 09:58:16.859166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.000 [2024-12-05 09:58:17.528193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.000 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 Malloc0 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 [2024-12-05 09:58:17.686235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 [ 00:26:30.258 { 00:26:30.258 "allow_any_host": true, 00:26:30.258 "hosts": [], 00:26:30.258 "listen_addresses": [ 00:26:30.258 { 00:26:30.258 "adrfam": "IPv4", 00:26:30.258 "traddr": "10.0.0.3", 00:26:30.258 "trsvcid": "4420", 00:26:30.258 "trtype": "TCP" 00:26:30.258 } 00:26:30.258 ], 00:26:30.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:30.258 "subtype": "Discovery" 00:26:30.258 }, 00:26:30.258 { 00:26:30.258 "allow_any_host": true, 00:26:30.258 "hosts": [], 00:26:30.258 "listen_addresses": [ 00:26:30.258 { 00:26:30.258 "adrfam": "IPv4", 00:26:30.258 "traddr": "10.0.0.3", 00:26:30.258 "trsvcid": "4420", 00:26:30.258 "trtype": "TCP" 00:26:30.258 } 00:26:30.258 ], 00:26:30.258 "max_cntlid": 65519, 00:26:30.258 "max_namespaces": 32, 00:26:30.258 "min_cntlid": 1, 00:26:30.258 "model_number": "SPDK bdev Controller", 00:26:30.258 "namespaces": [ 00:26:30.258 { 00:26:30.258 "bdev_name": "Malloc0", 00:26:30.258 "eui64": "ABCDEF0123456789", 00:26:30.258 "name": "Malloc0", 00:26:30.258 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:30.258 "nsid": 1, 00:26:30.258 "uuid": "a73e543b-cc39-47c5-add4-984707962d94" 00:26:30.258 } 00:26:30.258 ], 00:26:30.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.258 "serial_number": "SPDK00000000000001", 00:26:30.258 "subtype": "NVMe" 00:26:30.258 } 00:26:30.258 ] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.258 09:58:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:30.258 [2024-12-05 09:58:17.777428] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
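The rpc_cmd calls above are the entire target configuration for this test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, a namespace with fixed NGUID/EUI64, and data plus discovery listeners on 10.0.0.3:4420. Outside the harness the same setup can be driven with scripts/rpc.py, which is what rpc_cmd effectively wraps; the sketch below assumes you run it from the SPDK repo root against a target already listening on the default /var/tmp/spdk.sock, with all arguments copied from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # the discovery-subsystem query whose output follows in the log:
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The -L all flag enables the verbose nvme_tcp/nvme_ctrlr debug tracing seen below, which walks the full fabric controller bring-up (FABRIC CONNECT, VS/CAP reads, CC.EN/CSTS.RDY handshake, IDENTIFY, AER setup, keep-alive, then GET LOG PAGE reads of the discovery log) before printing the controller summary.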
00:26:30.258 [2024-12-05 09:58:17.777542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96573 ] 00:26:30.517 [2024-12-05 09:58:17.965040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:30.518 [2024-12-05 09:58:17.965176] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:30.518 [2024-12-05 09:58:17.965190] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:30.518 [2024-12-05 09:58:17.965212] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:30.518 [2024-12-05 09:58:17.965229] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:30.518 [2024-12-05 09:58:17.965640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:30.518 [2024-12-05 09:58:17.965734] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:26:30.518 [2024-12-05 09:58:17.969985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:30.518 [2024-12-05 09:58:17.970034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:30.518 [2024-12-05 09:58:17.970047] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:30.518 [2024-12-05 09:58:17.970054] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:30.518 [2024-12-05 09:58:17.970139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.970185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.970199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.970230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:30.518 [2024-12-05 09:58:17.970309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.976998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.977052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.977062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.977087] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:30.518 [2024-12-05 09:58:17.977108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:30.518 [2024-12-05 09:58:17.977118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:30.518 [2024-12-05 09:58:17.977140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977150] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.977171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.977208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.977351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.977370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.977380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.977400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:30.518 [2024-12-05 09:58:17.977414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:30.518 [2024-12-05 09:58:17.977427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.977458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.977488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.977583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.977595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.977601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.977617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:30.518 [2024-12-05 09:58:17.977635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.977652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.977680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.977721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.977801] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.977812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.977818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.977834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.977851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.977870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.977884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.977911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.978011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.978025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.978031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.978046] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:30.518 [2024-12-05 09:58:17.978055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.978071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.978182] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:30.518 [2024-12-05 09:58:17.978203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.978219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.978253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.978308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.978416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.978429] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.978434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.978451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:30.518 [2024-12-05 09:58:17.978470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.978518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.978545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.978648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.978663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.978669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.978684] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:30.518 [2024-12-05 09:58:17.978697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.978723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:30.518 [2024-12-05 09:58:17.978743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.978763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.978772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.978786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.978817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.978957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.518 [2024-12-05 09:58:17.978988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.518 [2024-12-05 09:58:17.978997] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979004] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:26:30.518 [2024-12-05 09:58:17.979013] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:26:30.518 [2024-12-05 09:58:17.979021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979051] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.979077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.979084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.979108] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:30.518 [2024-12-05 09:58:17.979118] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:30.518 [2024-12-05 09:58:17.979126] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:30.518 [2024-12-05 09:58:17.979135] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:30.518 [2024-12-05 09:58:17.979143] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:30.518 [2024-12-05 09:58:17.979152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.979171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.979187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:30.518 [2024-12-05 09:58:17.979258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.979345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.979356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.979362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.979386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979400] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.518 [2024-12-05 09:58:17.979427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.518 [2024-12-05 09:58:17.979457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.518 [2024-12-05 09:58:17.979490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.518 [2024-12-05 09:58:17.979519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.979549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:30.518 [2024-12-05 09:58:17.979564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.979621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:30.518 [2024-12-05 09:58:17.979633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:26:30.518 [2024-12-05 09:58:17.979640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:26:30.518 [2024-12-05 09:58:17.979654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:30.518 [2024-12-05 09:58:17.979662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:26:30.518 [2024-12-05 09:58:17.979791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.979814] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.979821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.979838] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:30.518 [2024-12-05 09:58:17.979848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:26:30.518 [2024-12-05 09:58:17.979871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.979881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.979894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.979922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:26:30.518 [2024-12-05 09:58:17.980090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.518 [2024-12-05 09:58:17.980118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.518 [2024-12-05 09:58:17.980126] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:26:30.518 [2024-12-05 09:58:17.980142] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:26:30.518 [2024-12-05 09:58:17.980152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980165] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980173] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.518 [2024-12-05 09:58:17.980197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.518 [2024-12-05 09:58:17.980203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:26:30.518 [2024-12-05 09:58:17.980249] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:30.518 [2024-12-05 09:58:17.980314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:26:30.518 [2024-12-05 09:58:17.980356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.518 [2024-12-05 09:58:17.980367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.518 [2024-12-05 09:58:17.980374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter
00:26:30.518 [2024-12-05 09:58:17.980380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.518 [2024-12-05 09:58:17.980396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.518 [2024-12-05 09:58:17.980435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.519 [2024-12-05 09:58:17.980448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:26:30.519 [2024-12-05 09:58:17.980695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.519 [2024-12-05 09:58:17.980728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.519 [2024-12-05 09:58:17.980736] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:17.980743] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4
00:26:30.519 [2024-12-05 09:58:17.980752] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024
00:26:30.519 [2024-12-05 09:58:17.980759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:17.980774] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:17.980781] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:17.980790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:17.980799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:17.980805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:17.980811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.024070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.024100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.024109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.024157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.519 [2024-12-05 09:58:18.024208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.519 [2024-12-05 09:58:18.024254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.519 [2024-12-05 09:58:18.024465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.519 [2024-12-05 09:58:18.024478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.519 [2024-12-05 09:58:18.024484] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4
00:26:30.519 [2024-12-05 09:58:18.024499] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072
00:26:30.519 [2024-12-05 09:58:18.024506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024519] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024537] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.024560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.024565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.024599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.519 [2024-12-05 09:58:18.024624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.519 [2024-12-05 09:58:18.024662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.519 [2024-12-05 09:58:18.024798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.519 [2024-12-05 09:58:18.024810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.519 [2024-12-05 09:58:18.024816] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024822] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4
00:26:30.519 [2024-12-05 09:58:18.024830] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8
00:26:30.519 [2024-12-05 09:58:18.024837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024848] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.024854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.065138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.065147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.519 =====================================================
00:26:30.519 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:30.519 =====================================================
00:26:30.519 Controller Capabilities/Features
00:26:30.519 ================================
00:26:30.519 Vendor ID: 0000
00:26:30.519 Subsystem Vendor ID: 0000
00:26:30.519 Serial Number: ....................
00:26:30.519 Model Number: ........................................
00:26:30.519 Firmware Version: 25.01
00:26:30.519 Recommended Arb Burst: 0
00:26:30.519 IEEE OUI Identifier: 00 00 00
00:26:30.519 Multi-path I/O
00:26:30.519 May have multiple subsystem ports: No
00:26:30.519 May have multiple controllers: No
00:26:30.519 Associated with SR-IOV VF: No
00:26:30.519 Max Data Transfer Size: 131072
00:26:30.519 Max Number of Namespaces: 0
00:26:30.519 Max Number of I/O Queues: 1024
00:26:30.519 NVMe Specification Version (VS): 1.3
00:26:30.519 NVMe Specification Version (Identify): 1.3
00:26:30.519 Maximum Queue Entries: 128
00:26:30.519 Contiguous Queues Required: Yes
00:26:30.519 Arbitration Mechanisms Supported
00:26:30.519 Weighted Round Robin: Not Supported
00:26:30.519 Vendor Specific: Not Supported
00:26:30.519 Reset Timeout: 15000 ms
00:26:30.519 Doorbell Stride: 4 bytes
00:26:30.519 NVM Subsystem Reset: Not Supported
00:26:30.519 Command Sets Supported
00:26:30.519 NVM Command Set: Supported
00:26:30.519 Boot Partition: Not Supported
00:26:30.519 Memory Page Size Minimum: 4096 bytes
00:26:30.519 Memory Page Size Maximum: 4096 bytes
00:26:30.519 Persistent Memory Region: Not Supported
00:26:30.519 Optional Asynchronous Events Supported
00:26:30.519 Namespace Attribute Notices: Not Supported
00:26:30.519 Firmware Activation Notices: Not Supported
00:26:30.519 ANA Change Notices: Not Supported
00:26:30.519 PLE Aggregate Log Change Notices: Not Supported
00:26:30.519 LBA Status Info Alert Notices: Not Supported
00:26:30.519 EGE Aggregate Log Change Notices: Not Supported
00:26:30.519 Normal NVM Subsystem Shutdown event: Not Supported
00:26:30.519 Zone Descriptor Change Notices: Not Supported
00:26:30.519 Discovery Log Change Notices: Supported
00:26:30.519 Controller Attributes
00:26:30.519 128-bit Host Identifier: Not Supported
00:26:30.519 Non-Operational Permissive Mode: Not Supported
00:26:30.519 NVM Sets: Not Supported
00:26:30.519 Read Recovery Levels: Not Supported
00:26:30.519 Endurance Groups: Not Supported
00:26:30.519 Predictable Latency Mode: Not Supported
00:26:30.519 Traffic Based Keep ALive: Not Supported
00:26:30.519 Namespace Granularity: Not Supported
00:26:30.519 SQ Associations: Not Supported
00:26:30.519 UUID List: Not Supported
00:26:30.519 Multi-Domain Subsystem: Not Supported
00:26:30.519 Fixed Capacity Management: Not Supported
00:26:30.519 Variable Capacity Management: Not Supported
00:26:30.519 Delete Endurance Group: Not Supported
00:26:30.519 Delete NVM Set: Not Supported
00:26:30.519 Extended LBA Formats Supported: Not Supported
00:26:30.519 Flexible Data Placement Supported: Not Supported
00:26:30.519
00:26:30.519 Controller Memory Buffer Support
00:26:30.519 ================================
00:26:30.519 Supported: No
00:26:30.519
00:26:30.519 Persistent Memory Region Support
00:26:30.519 ================================
00:26:30.519 Supported: No
00:26:30.519
00:26:30.519 Admin Command Set Attributes
00:26:30.519 ============================
00:26:30.519 Security Send/Receive: Not Supported
00:26:30.519 Format NVM: Not Supported
00:26:30.519 Firmware Activate/Download: Not Supported
00:26:30.519 Namespace Management: Not Supported
00:26:30.519 Device Self-Test: Not Supported
00:26:30.519 Directives: Not Supported
00:26:30.519 NVMe-MI: Not Supported
00:26:30.519 Virtualization Management: Not Supported
00:26:30.519 Doorbell Buffer Config: Not Supported
00:26:30.519 Get LBA Status Capability: Not Supported
00:26:30.519 Command & Feature Lockdown Capability: Not Supported
00:26:30.519 Abort Command Limit: 1
00:26:30.519 Async Event Request Limit: 4
00:26:30.519 Number of Firmware Slots: N/A
00:26:30.519 Firmware Slot 1 Read-Only: N/A
00:26:30.519 Firmware Activation Without Reset: N/A
00:26:30.519 Multiple Update Detection Support: N/A
00:26:30.519 Firmware Update Granularity: No Information Provided
00:26:30.519 Per-Namespace SMART Log: No
00:26:30.519 Asymmetric Namespace Access Log Page: Not Supported
00:26:30.519 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:30.519 Command Effects Log Page: Not Supported
00:26:30.519 Get Log Page Extended Data: Supported
00:26:30.519 Telemetry Log Pages: Not Supported
00:26:30.519 Persistent Event Log Pages: Not Supported
00:26:30.519 Supported Log Pages Log Page: May Support
00:26:30.519 Commands Supported & Effects Log Page: Not Supported
00:26:30.519 Feature Identifiers & Effects Log Page:May Support
00:26:30.519 NVMe-MI Commands & Effects Log Page: May Support
00:26:30.519 Data Area 4 for Telemetry Log: Not Supported
00:26:30.519 Error Log Page Entries Supported: 128
00:26:30.519 Keep Alive: Not Supported
00:26:30.519
00:26:30.519 NVM Command Set Attributes
00:26:30.519 ==========================
00:26:30.519 Submission Queue Entry Size
00:26:30.519 Max: 1
00:26:30.519 Min: 1
00:26:30.519 Completion Queue Entry Size
00:26:30.519 Max: 1
00:26:30.519 Min: 1
00:26:30.519 Number of Namespaces: 0
00:26:30.519 Compare Command: Not Supported
00:26:30.519 Write Uncorrectable Command: Not Supported
00:26:30.519 Dataset Management Command: Not Supported
00:26:30.519 Write Zeroes Command: Not Supported
00:26:30.519 Set Features Save Field: Not Supported
00:26:30.519 Reservations: Not Supported
00:26:30.519 Timestamp: Not Supported
00:26:30.519 Copy: Not Supported
00:26:30.519 Volatile Write Cache: Not Present
00:26:30.519 Atomic Write Unit (Normal): 1
00:26:30.519 Atomic Write Unit (PFail): 1
00:26:30.519 Atomic Compare & Write Unit: 1
00:26:30.519 Fused Compare & Write: Supported
00:26:30.519 Scatter-Gather List
00:26:30.519 SGL Command Set: Supported
00:26:30.519 SGL Keyed: Supported
00:26:30.519 SGL Bit Bucket Descriptor: Not Supported
00:26:30.519 SGL Metadata Pointer: Not Supported
00:26:30.519 Oversized SGL: Not Supported
00:26:30.519 SGL Metadata Address: Not Supported
00:26:30.519 SGL Offset: Supported
00:26:30.519 Transport SGL Data Block: Not Supported
00:26:30.519 Replay Protected Memory Block: Not Supported
00:26:30.519
00:26:30.519 Firmware Slot Information
00:26:30.519 =========================
00:26:30.519 Active slot: 0
00:26:30.519
00:26:30.519
00:26:30.519 Error Log
00:26:30.519 =========
00:26:30.519
00:26:30.519 Active Namespaces
00:26:30.519 =================
00:26:30.519 Discovery Log Page
00:26:30.519 ==================
00:26:30.519 Generation Counter: 2
00:26:30.519 Number of Records: 2
00:26:30.519 Record Format: 0
00:26:30.519
00:26:30.519 Discovery Log Entry 0
00:26:30.519 ----------------------
00:26:30.519 Transport Type: 3 (TCP)
00:26:30.519 Address Family: 1 (IPv4)
00:26:30.519 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:30.519 Entry Flags:
00:26:30.519 Duplicate Returned Information: 1
00:26:30.519 Explicit Persistent Connection Support for Discovery: 1
00:26:30.519 Transport Requirements:
00:26:30.519 Secure Channel: Not Required
00:26:30.519 Port ID: 0 (0x0000)
00:26:30.519 Controller ID: 65535 (0xffff)
00:26:30.519 Admin Max SQ Size: 128
00:26:30.519 Transport Service Identifier: 4420
00:26:30.519 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:30.519 Transport Address: 10.0.0.3
00:26:30.519 Discovery Log Entry 1
00:26:30.519 ----------------------
00:26:30.519 Transport Type: 3 (TCP)
00:26:30.519 Address Family: 1 (IPv4)
00:26:30.519 Subsystem Type: 2 (NVM Subsystem)
00:26:30.519 Entry Flags:
00:26:30.519 Duplicate Returned Information: 0
00:26:30.519 Explicit Persistent Connection Support for Discovery: 0
00:26:30.519 Transport Requirements:
00:26:30.519 Secure Channel: Not Required
00:26:30.519 Port ID: 0 (0x0000)
00:26:30.519 Controller ID: 65535 (0xffff)
00:26:30.519 Admin Max SQ Size: 128
00:26:30.519 Transport Service Identifier: 4420
00:26:30.519 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:30.519 Transport Address: 10.0.0.3
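The two-entry report above is spdk_nvme_identify's rendering of the discovery log page that the GET LOG PAGE (LID 0x70) reads in the trace fetched. As a minimal sketch (not the test's own code), the same page can be decoded from a raw buffer with the public structures in spdk/nvmf_spec.h; print_discovery_page() is a hypothetical helper name:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvmf_spec.h"

    static void
    print_discovery_page(const struct spdk_nvmf_discovery_log_page *page)
    {
        /* genctr/numrec correspond to "Generation Counter: 2" and
         * "Number of Records: 2" in the report above. */
        printf("genctr %" PRIu64 ", %" PRIu64 " records\n",
               page->genctr, page->numrec);
        for (uint64_t i = 0; i < page->numrec; i++) {
            const struct spdk_nvmf_discovery_log_page_entry *e = &page->entries[i];

            /* trtype 3 = TCP; subtype 3 = discovery subsystem, 2 = NVM subsystem */
            printf("entry %" PRIu64 ": trtype %u subtype %u traddr %.256s trsvcid %.32s subnqn %.256s\n",
                   i, e->trtype, e->subtype, e->traddr, e->trsvcid, e->subnqn);
        }
    }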
[2024-12-05 09:58:18.065334] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:26:30.519 [2024-12-05 09:58:18.065390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:30.519 [2024-12-05 09:58:18.065415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:30.519 [2024-12-05 09:58:18.065431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:30.519 [2024-12-05 09:58:18.065446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:30.519 [2024-12-05 09:58:18.065468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.519 [2024-12-05 09:58:18.065514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.519 [2024-12-05 09:58:18.065552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.519 [2024-12-05 09:58:18.065642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.065655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.065661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.519 [2024-12-05 09:58:18.065720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.519 [2024-12-05 09:58:18.065755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.519 [2024-12-05 09:58:18.065881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.065893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.065899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.065914] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:26:30.519 [2024-12-05 09:58:18.065922] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:26:30.519 [2024-12-05 09:58:18.065939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.065954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.519 [2024-12-05 09:58:18.065992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.519 [2024-12-05 09:58:18.066025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.519 [2024-12-05 09:58:18.066109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.519 [2024-12-05 09:58:18.066120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.519 [2024-12-05 09:58:18.066126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.066132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.519 [2024-12-05 09:58:18.066150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.066188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.519 [2024-12-05 09:58:18.066194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.066207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.066234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.066329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.066340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.066346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.066369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.066395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.066421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.066537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.066549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.066554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.066577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.066602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.066627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.066731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.066743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.066749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.066788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.066815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.066840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.066921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.066933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.066939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.066962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.066976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.066988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.071052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.071106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.071135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.071141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.071148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.071170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.071179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.071185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.520 [2024-12-05 09:58:18.071200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.520 [2024-12-05 09:58:18.071249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.520 [2024-12-05 09:58:18.071369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.520 [2024-12-05 09:58:18.071382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.520 [2024-12-05 09:58:18.071388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.520 [2024-12-05 09:58:18.071395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:26:30.520 [2024-12-05 09:58:18.071410] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:26:30.520
00:26:30.520 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:26:30.781 [2024-12-05 09:58:18.189110] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
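The discovery session above ends with the driver's CC shutdown handshake ("shutdown complete in 5 milliseconds") before identify.sh reconnects to the I/O subsystem, passing spdk_nvme_identify a -r transport-ID string. A minimal sketch, using only public SPDK API, of that string-to-connection path; the trid string is copied from the command line above and the program name is illustrative:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";           /* illustrative name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same format as the -r argument in the command line above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the whole admin-queue bring-up traced below: icreq,
         * FABRIC CONNECT, the CC.EN handshake, IDENTIFY, and so on. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }
        printf("connected to %s\n", trid.subnqn);

        /* Triggers the same CC shutdown handshake seen in the trace above. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }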
00:26:30.781 [2024-12-05 09:58:18.189246] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96587 ]
00:26:30.781 [2024-12-05 09:58:18.375203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:26:30.781 [2024-12-05 09:58:18.375328] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:26:30.781 [2024-12-05 09:58:18.375341] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:26:30.781 [2024-12-05 09:58:18.375365] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:26:30.781 [2024-12-05 09:58:18.375378] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:26:30.782 [2024-12-05 09:58:18.375764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:26:30.782 [2024-12-05 09:58:18.375833] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0
00:26:30.782 [2024-12-05 09:58:18.383050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:26:30.782 [2024-12-05 09:58:18.383110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:26:30.782 [2024-12-05 09:58:18.383120] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:26:30.782 [2024-12-05 09:58:18.383129] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:26:30.782 [2024-12-05 09:58:18.383210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.383231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.383239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.383266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:26:30.782 [2024-12-05 09:58:18.383327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.390992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.391044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.391053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.391081] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:26:30.782 [2024-12-05 09:58:18.391097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:26:30.782 [2024-12-05 09:58:18.391107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:26:30.782 [2024-12-05 09:58:18.391130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.391165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.391202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.391336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.391348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.391354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.391374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:26:30.782 [2024-12-05 09:58:18.391390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:26:30.782 [2024-12-05 09:58:18.391401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.391434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.391464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.391556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.391571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.391577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.391593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:26:30.782 [2024-12-05 09:58:18.391607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.391618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.391649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.391677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.391769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.391780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.391786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.391802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.391818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.391840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.391852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.391884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.391998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.392011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.392017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.392033] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:26:30.782 [2024-12-05 09:58:18.392041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.392058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.392167] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:26:30.782 [2024-12-05 09:58:18.392175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.392189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.392223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.392252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.392354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.392369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.392375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.392391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:26:30.782 [2024-12-05 09:58:18.392410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.392437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.392467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.392554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.782 [2024-12-05 09:58:18.392565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.782 [2024-12-05 09:58:18.392571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.782 [2024-12-05 09:58:18.392585] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:26:30.782 [2024-12-05 09:58:18.392594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:26:30.782 [2024-12-05 09:58:18.392620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:26:30.782 [2024-12-05 09:58:18.392641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:26:30.782 [2024-12-05 09:58:18.392665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.782 [2024-12-05 09:58:18.392690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.782 [2024-12-05 09:58:18.392722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.782 [2024-12-05 09:58:18.392882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.782 [2024-12-05 09:58:18.392908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.782 [2024-12-05 09:58:18.392916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.782 [2024-12-05 09:58:18.392923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0
00:26:30.782 [2024-12-05 09:58:18.392945] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096
00:26:30.783 [2024-12-05 09:58:18.392964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
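The FABRIC PROPERTY GET/SET capsules traced above are the fabrics encoding of the standard CC/CSTS enable handshake (read VS and CAP, check EN, disable until CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1). A schematic sketch of that sequence follows; prop_get()/prop_set() are hypothetical placeholders for the property transport, not SPDK API, and the offsets are the NVMe-defined register offsets:

    #include <stdint.h>

    extern uint64_t prop_get(uint32_t ofs);            /* hypothetical helper */
    extern void prop_set(uint32_t ofs, uint64_t val);  /* hypothetical helper */

    #define NVME_REG_CC   0x14    /* controller configuration */
    #define NVME_REG_CSTS 0x1c    /* controller status */

    static void
    enable_controller(void)
    {
        /* "CC.EN = 0 && CSTS.RDY = 0": start from a disabled controller. */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~1ull);
        while (prop_get(NVME_REG_CSTS) & 1) {
            /* "disable and wait for CSTS.RDY = 0 (timeout 15000 ms)" */
        }

        /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1". */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | 1ull);
        while (!(prop_get(NVME_REG_CSTS) & 1)) {
            /* each poll is one FABRIC PROPERTY GET capsule in the trace */
        }
    }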
00:26:30.783 [2024-12-05 09:58:18.392979] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.392986] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.783 [2024-12-05 09:58:18.393009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.783 [2024-12-05 09:58:18.393015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.783 [2024-12-05 09:58:18.393040] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:26:30.783 [2024-12-05 09:58:18.393049] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:26:30.783 [2024-12-05 09:58:18.393070] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:26:30.783 [2024-12-05 09:58:18.393082] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:26:30.783 [2024-12-05 09:58:18.393092] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:26:30.783 [2024-12-05 09:58:18.393102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:26:30.783 [2024-12-05 09:58:18.393209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.783 [2024-12-05 09:58:18.393302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.783 [2024-12-05 09:58:18.393313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.783 [2024-12-05 09:58:18.393320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:26:30.783 [2024-12-05 09:58:18.393347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.783 [2024-12-05 09:58:18.393397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.783 [2024-12-05 09:58:18.393427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.783 [2024-12-05 09:58:18.393456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.783 [2024-12-05 09:58:18.393496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.783 [2024-12-05 09:58:18.393577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:26:30.783 [2024-12-05 09:58:18.393587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0
00:26:30.783 [2024-12-05 09:58:18.393594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0
00:26:30.783 [2024-12-05 09:58:18.393601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:26:30.783 [2024-12-05 09:58:18.393608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.783 [2024-12-05 09:58:18.393755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.783 [2024-12-05 09:58:18.393766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.783 [2024-12-05 09:58:18.393772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.783 [2024-12-05 09:58:18.393792] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:26:30.783 [2024-12-05 09:58:18.393802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.393838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.393852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.393865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:26:30.783 [2024-12-05 09:58:18.393896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.783 [2024-12-05 09:58:18.394020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.783 [2024-12-05 09:58:18.394034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.783 [2024-12-05 09:58:18.394040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.783 [2024-12-05 09:58:18.394137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.394202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.394224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.394250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.783 [2024-12-05 09:58:18.394282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.783 [2024-12-05 09:58:18.394407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.783 [2024-12-05 09:58:18.394419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.783 [2024-12-05 09:58:18.394425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394431] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4
00:26:30.783 [2024-12-05 09:58:18.394439] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096
00:26:30.783 [2024-12-05 09:58:18.394445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
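The keep-alive cadence ("Sending keep alive every 5000000 us") and the queue-count negotiation (SET FEATURES NUMBER OF QUEUES) traced above are both driven by the connect-time options in spdk_nvme_ctrlr_opts. A sketch under that assumption; the two fields are real SPDK options, the values here are only examples, and trid is a populated transport ID as in the earlier sketch:

    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_with_opts(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* KATO: the driver sends keep alives at a fraction of this, which is
         * why a 10 s timeout shows up as "keep alive every 5000000 us". */
        opts.keep_alive_timeout_ms = 10000;
        /* Requested via SET FEATURES NUMBER OF QUEUES (cdw10:00000007). */
        opts.num_io_queues = 4;

        return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }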
00:26:30.783 [2024-12-05 09:58:18.394463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394471] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.783 [2024-12-05 09:58:18.394509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.783 [2024-12-05 09:58:18.394516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.783 [2024-12-05 09:58:18.394556] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:26:30.783 [2024-12-05 09:58:18.394587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.394611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:26:30.783 [2024-12-05 09:58:18.394627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.783 [2024-12-05 09:58:18.394651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.783 [2024-12-05 09:58:18.394682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.783 [2024-12-05 09:58:18.394837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.783 [2024-12-05 09:58:18.394859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.783 [2024-12-05 09:58:18.394866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394872] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4
00:26:30.783 [2024-12-05 09:58:18.394879] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096
00:26:30.783 [2024-12-05 09:58:18.394885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.783 [2024-12-05 09:58:18.394896] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.394903] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.394914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.398977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.399001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.399053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.399140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.399175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.784 [2024-12-05 09:58:18.399293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:30.784 [2024-12-05 09:58:18.399314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:30.784 [2024-12-05 09:58:18.399321] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399327] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4
00:26:30.784 [2024-12-05 09:58:18.399334] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096
00:26:30.784 [2024-12-05 09:58:18.399351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399364] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.399410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.399415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.399455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:26:30.784 [2024-12-05 09:58:18.399537] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:26:30.784 [2024-12-05 09:58:18.399559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
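Once "Namespace 1 was added" and the per-namespace identifies complete, active namespaces can be walked with SPDK's public iterator. A minimal sketch (not the test's code); ctrlr is the handle returned by spdk_nvme_connect():

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Walk the active namespace list built by IDENTIFY (CNS 0x02/0x00). */
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("nsid %" PRIu32 ": %" PRIu64 " sectors of %" PRIu32 " bytes\n",
                   nsid, spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }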
00:26:30.784 [2024-12-05 09:58:18.399571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:26:30.784 [2024-12-05 09:58:18.399607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.399645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.399656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.399679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:26:30.784 [2024-12-05 09:58:18.399722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:26:30.784 [2024-12-05 09:58:18.399734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:26:30.784 [2024-12-05 09:58:18.399855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.399867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.399877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.399896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.399907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.399913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.399934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.399958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.399970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:26:30.784 [2024-12-05 09:58:18.400119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.400130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.400136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.400163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.400182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:26:30.784 [2024-12-05 09:58:18.400303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.400314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.400320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.400356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.400385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:26:30.784 [2024-12-05 09:58:18.400511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:30.784 [2024-12-05 09:58:18.400525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:30.784 [2024-12-05 09:58:18.400531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080
00:26:30.784 [2024-12-05 09:58:18.400567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.400591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.400622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:30.784 [2024-12-05 09:58:18.400644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080)
00:26:30.784 [2024-12-05 09:58:18.400655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.784 [2024-12-05 09:58:18.400675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
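The GET LOG PAGE capsules above pack the log identifier into the low byte of cdw10 and a zero-based dword count above it, so cdw10:007f0002 asks for 128 dwords (512 bytes) of log page 0x02, the SMART / health information page. A sketch of the same request through the public API; on_log_page() is a caller-supplied name and error handling is elided:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_health_information_page g_health;

    static void
    on_log_page(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)cb_arg = true;    /* completion observed */
    }

    static int
    read_health_page(struct spdk_nvme_ctrlr *ctrlr)
    {
        bool done = false;
        int rc;

        /* nsid:ffffffff in the trace is SPDK_NVME_GLOBAL_NS_TAG. */
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG, &g_health,
                                              sizeof(g_health), 0, on_log_page, &done);
        if (rc != 0) {
            return rc;
        }
        /* Poll the admin queue until the completion callback fires. */
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }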
00:26:30.784 [2024-12-05 09:58:18.400683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:26:30.784 [2024-12-05 09:58:18.400693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.784 [2024-12-05 09:58:18.400721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:26:30.784 [2024-12-05 09:58:18.400733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:26:30.784 [2024-12-05 09:58:18.400740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:26:30.784 [2024-12-05 09:58:18.400746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:26:30.784 [2024-12-05 09:58:18.400952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.784 [2024-12-05 09:58:18.400964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.784 [2024-12-05 09:58:18.400985] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.784 [2024-12-05 09:58:18.400995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:26:30.784 [2024-12-05 09:58:18.401002] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:26:30.784 [2024-12-05 09:58:18.401010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.784 [2024-12-05 09:58:18.401046] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.784 [2024-12-05 09:58:18.401055] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.785 [2024-12-05 09:58:18.401076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.785 [2024-12-05 09:58:18.401081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401087] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:26:30.785 [2024-12-05 09:58:18.401094] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:26:30.785 [2024-12-05 09:58:18.401100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401109] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401115] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.785 [2024-12-05 09:58:18.401134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.785 [2024-12-05 09:58:18.401140] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401146] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:26:30.785 [2024-12-05 09:58:18.401152] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:26:30.785 
[2024-12-05 09:58:18.401158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401176] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:30.785 [2024-12-05 09:58:18.401192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:30.785 [2024-12-05 09:58:18.401197] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401203] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:26:30.785 [2024-12-05 09:58:18.401215] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:26:30.785 [2024-12-05 09:58:18.401221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401231] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401237] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.785 [2024-12-05 09:58:18.401257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.785 [2024-12-05 09:58:18.401262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:26:30.785 [2024-12-05 09:58:18.401298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.785 [2024-12-05 09:58:18.401310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.785 [2024-12-05 09:58:18.401319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:26:30.785 [2024-12-05 09:58:18.401347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.785 [2024-12-05 09:58:18.401357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.785 [2024-12-05 09:58:18.401362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:26:30.785 [2024-12-05 09:58:18.401379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.785 [2024-12-05 09:58:18.401387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.785 [2024-12-05 09:58:18.401393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.785 [2024-12-05 09:58:18.401398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:26:30.785 ===================================================== 00:26:30.785 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.785 ===================================================== 00:26:30.785 Controller Capabilities/Features 00:26:30.785 ================================ 00:26:30.785 Vendor ID: 8086 00:26:30.785 Subsystem Vendor ID: 8086 
00:26:30.785 Serial Number: SPDK00000000000001
00:26:30.785 Model Number: SPDK bdev Controller
00:26:30.785 Firmware Version: 25.01
00:26:30.785 Recommended Arb Burst: 6
00:26:30.785 IEEE OUI Identifier: e4 d2 5c
00:26:30.785 Multi-path I/O
00:26:30.785 May have multiple subsystem ports: Yes
00:26:30.785 May have multiple controllers: Yes
00:26:30.785 Associated with SR-IOV VF: No
00:26:30.785 Max Data Transfer Size: 131072
00:26:30.785 Max Number of Namespaces: 32
00:26:30.785 Max Number of I/O Queues: 127
00:26:30.785 NVMe Specification Version (VS): 1.3
00:26:30.785 NVMe Specification Version (Identify): 1.3
00:26:30.785 Maximum Queue Entries: 128
00:26:30.785 Contiguous Queues Required: Yes
00:26:30.785 Arbitration Mechanisms Supported
00:26:30.785 Weighted Round Robin: Not Supported
00:26:30.785 Vendor Specific: Not Supported
00:26:30.785 Reset Timeout: 15000 ms
00:26:30.785 Doorbell Stride: 4 bytes
00:26:30.785 NVM Subsystem Reset: Not Supported
00:26:30.785 Command Sets Supported
00:26:30.785 NVM Command Set: Supported
00:26:30.785 Boot Partition: Not Supported
00:26:30.785 Memory Page Size Minimum: 4096 bytes
00:26:30.785 Memory Page Size Maximum: 4096 bytes
00:26:30.785 Persistent Memory Region: Not Supported
00:26:30.785 Optional Asynchronous Events Supported
00:26:30.785 Namespace Attribute Notices: Supported
00:26:30.785 Firmware Activation Notices: Not Supported
00:26:30.785 ANA Change Notices: Not Supported
00:26:30.785 PLE Aggregate Log Change Notices: Not Supported
00:26:30.785 LBA Status Info Alert Notices: Not Supported
00:26:30.785 EGE Aggregate Log Change Notices: Not Supported
00:26:30.785 Normal NVM Subsystem Shutdown event: Not Supported
00:26:30.785 Zone Descriptor Change Notices: Not Supported
00:26:30.785 Discovery Log Change Notices: Not Supported
00:26:30.785 Controller Attributes
00:26:30.785 128-bit Host Identifier: Supported
00:26:30.785 Non-Operational Permissive Mode: Not Supported
00:26:30.785 NVM Sets: Not Supported
00:26:30.785 Read Recovery Levels: Not Supported
00:26:30.785 Endurance Groups: Not Supported
00:26:30.785 Predictable Latency Mode: Not Supported
00:26:30.785 Traffic Based Keep Alive: Not Supported
00:26:30.785 Namespace Granularity: Not Supported
00:26:30.785 SQ Associations: Not Supported
00:26:30.785 UUID List: Not Supported
00:26:30.785 Multi-Domain Subsystem: Not Supported
00:26:30.785 Fixed Capacity Management: Not Supported
00:26:30.785 Variable Capacity Management: Not Supported
00:26:30.785 Delete Endurance Group: Not Supported
00:26:30.785 Delete NVM Set: Not Supported
00:26:30.785 Extended LBA Formats Supported: Not Supported
00:26:30.785 Flexible Data Placement Supported: Not Supported
00:26:30.785
00:26:30.785 Controller Memory Buffer Support
00:26:30.785 ================================
00:26:30.785 Supported: No
00:26:30.785
00:26:30.785 Persistent Memory Region Support
00:26:30.785 ================================
00:26:30.785 Supported: No
00:26:30.785
00:26:30.785 Admin Command Set Attributes
00:26:30.785 ============================
00:26:30.785 Security Send/Receive: Not Supported
00:26:30.785 Format NVM: Not Supported
00:26:30.785 Firmware Activate/Download: Not Supported
00:26:30.785 Namespace Management: Not Supported
00:26:30.785 Device Self-Test: Not Supported
00:26:30.785 Directives: Not Supported
00:26:30.785 NVMe-MI: Not Supported
00:26:30.785 Virtualization Management: Not Supported
00:26:30.785 Doorbell Buffer Config: Not Supported
00:26:30.785 Get LBA Status Capability: Not Supported
00:26:30.785 Command & Feature Lockdown Capability: Not Supported
00:26:30.785 Abort Command Limit: 4
00:26:30.785 Async Event Request Limit: 4
00:26:30.785 Number of Firmware Slots: N/A
00:26:30.785 Firmware Slot 1 Read-Only: N/A
00:26:30.785 Firmware Activation Without Reset: N/A
00:26:30.785 Multiple Update Detection Support: N/A
00:26:30.785 Firmware Update Granularity: No Information Provided
00:26:30.785 Per-Namespace SMART Log: No
00:26:30.785 Asymmetric Namespace Access Log Page: Not Supported
00:26:30.785 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:30.785 Command Effects Log Page: Supported
00:26:30.785 Get Log Page Extended Data: Supported
00:26:30.785 Telemetry Log Pages: Not Supported
00:26:30.785 Persistent Event Log Pages: Not Supported
00:26:30.785 Supported Log Pages Log Page: May Support
00:26:30.785 Commands Supported & Effects Log Page: Not Supported
00:26:30.785 Feature Identifiers & Effects Log Page: May Support
00:26:30.785 NVMe-MI Commands & Effects Log Page: May Support
00:26:30.785 Data Area 4 for Telemetry Log: Not Supported
00:26:30.785 Error Log Page Entries Supported: 128
00:26:30.785 Keep Alive: Supported
00:26:30.785 Keep Alive Granularity: 10000 ms
00:26:30.785
00:26:30.785 NVM Command Set Attributes
00:26:30.785 ==========================
00:26:30.785 Submission Queue Entry Size
00:26:30.785 Max: 64
00:26:30.785 Min: 64
00:26:30.785 Completion Queue Entry Size
00:26:30.785 Max: 16
00:26:30.785 Min: 16
00:26:30.785 Number of Namespaces: 32
00:26:30.785 Compare Command: Supported
00:26:30.785 Write Uncorrectable Command: Not Supported
00:26:30.785 Dataset Management Command: Supported
00:26:30.785 Write Zeroes Command: Supported
00:26:30.785 Set Features Save Field: Not Supported
00:26:30.786 Reservations: Supported
00:26:30.786 Timestamp: Not Supported
00:26:30.786 Copy: Supported
00:26:30.786 Volatile Write Cache: Present
00:26:30.786 Atomic Write Unit (Normal): 1
00:26:30.786 Atomic Write Unit (PFail): 1
00:26:30.786 Atomic Compare & Write Unit: 1
00:26:30.786 Fused Compare & Write: Supported
00:26:30.786 Scatter-Gather List
00:26:30.786 SGL Command Set: Supported
00:26:30.786 SGL Keyed: Supported
00:26:30.786 SGL Bit Bucket Descriptor: Not Supported
00:26:30.786 SGL Metadata Pointer: Not Supported
00:26:30.786 Oversized SGL: Not Supported
00:26:30.786 SGL Metadata Address: Not Supported
00:26:30.786 SGL Offset: Supported
00:26:30.786 Transport SGL Data Block: Not Supported
00:26:30.786 Replay Protected Memory Block: Not Supported
00:26:30.786
00:26:30.786 Firmware Slot Information
00:26:30.786 =========================
00:26:30.786 Active slot: 1
00:26:30.786 Slot 1 Firmware Revision: 25.01
00:26:30.786
00:26:30.786
00:26:30.786 Commands Supported and Effects
00:26:30.786 ==============================
00:26:30.786 Admin Commands
00:26:30.786 --------------
00:26:30.786 Get Log Page (02h): Supported
00:26:30.786 Identify (06h): Supported
00:26:30.786 Abort (08h): Supported
00:26:30.786 Set Features (09h): Supported
00:26:30.786 Get Features (0Ah): Supported
00:26:30.786 Asynchronous Event Request (0Ch): Supported
00:26:30.786 Keep Alive (18h): Supported
00:26:30.786 I/O Commands
00:26:30.786 ------------
00:26:30.786 Flush (00h): Supported LBA-Change
00:26:30.786 Write (01h): Supported LBA-Change
00:26:30.786 Read (02h): Supported
00:26:30.786 Compare (05h): Supported
00:26:30.786 Write Zeroes (08h): Supported LBA-Change
00:26:30.786 Dataset Management (09h): Supported LBA-Change
00:26:30.786 Copy (19h): Supported LBA-Change
00:26:30.786
00:26:30.786 Error Log
00:26:30.786
========= 00:26:30.786 00:26:30.786 Arbitration 00:26:30.786 =========== 00:26:30.786 Arbitration Burst: 1 00:26:30.786 00:26:30.786 Power Management 00:26:30.786 ================ 00:26:30.786 Number of Power States: 1 00:26:30.786 Current Power State: Power State #0 00:26:30.786 Power State #0: 00:26:30.786 Max Power: 0.00 W 00:26:30.786 Non-Operational State: Operational 00:26:30.786 Entry Latency: Not Reported 00:26:30.786 Exit Latency: Not Reported 00:26:30.786 Relative Read Throughput: 0 00:26:30.786 Relative Read Latency: 0 00:26:30.786 Relative Write Throughput: 0 00:26:30.786 Relative Write Latency: 0 00:26:30.786 Idle Power: Not Reported 00:26:30.786 Active Power: Not Reported 00:26:30.786 Non-Operational Permissive Mode: Not Supported 00:26:30.786 00:26:30.786 Health Information 00:26:30.786 ================== 00:26:30.786 Critical Warnings: 00:26:30.786 Available Spare Space: OK 00:26:30.786 Temperature: OK 00:26:30.786 Device Reliability: OK 00:26:30.786 Read Only: No 00:26:30.786 Volatile Memory Backup: OK 00:26:30.786 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:30.786 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:30.786 Available Spare: 0% 00:26:30.786 Available Spare Threshold: 0% 00:26:30.786 Life Percentage Used:[2024-12-05 09:58:18.401568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.401580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:26:30.786 [2024-12-05 09:58:18.401594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.786 [2024-12-05 09:58:18.401627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:26:30.786 [2024-12-05 09:58:18.401729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.786 [2024-12-05 09:58:18.401740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.786 [2024-12-05 09:58:18.401747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.401753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.401857] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:30.786 [2024-12-05 09:58:18.401903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.401921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.786 [2024-12-05 09:58:18.401945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.401956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.786 [2024-12-05 09:58:18.401963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.401970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.786 [2024-12-05 09:58:18.401978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
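Everything from the "NVMe over Fabrics controller" banner above down through the health section is printed by SPDK's identify example app, which host/identify.sh points at the target's TCP listener; the *DEBUG* lines interleave with the report because the test also enables debug logging. A minimal sketch of re-running the same dump by hand, assuming the workspace layout seen earlier in this log and that the target from this run is still listening:
  # Path and transport string are taken from this run; adjust for other setups.
  /home/vagrant/spdk_repo/spdk/build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'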
00:26:30.786 [2024-12-05 09:58:18.401985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.786 [2024-12-05 09:58:18.401999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.786 [2024-12-05 09:58:18.402027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.786 [2024-12-05 09:58:18.402063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:30.786 [2024-12-05 09:58:18.402175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.786 [2024-12-05 09:58:18.402189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.786 [2024-12-05 09:58:18.402196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.402221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.786 [2024-12-05 09:58:18.402252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.786 [2024-12-05 09:58:18.402291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:30.786 [2024-12-05 09:58:18.402430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.786 [2024-12-05 09:58:18.402446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.786 [2024-12-05 09:58:18.402453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:26:30.786 [2024-12-05 09:58:18.402483] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:30.786 [2024-12-05 09:58:18.402503] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:30.786 [2024-12-05 09:58:18.402523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.786 [2024-12-05 09:58:18.402547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.786 [2024-12-05 09:58:18.402576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.787 [2024-12-05 09:58:18.402604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:30.787 [2024-12-05 09:58:18.402690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.787 [2024-12-05 
09:58:18.402701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.787 [2024-12-05 09:58:18.402706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:26:30.787 [2024-12-05 09:58:18.402733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.787 [2024-12-05 09:58:18.402757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.787 [2024-12-05 09:58:18.402782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:30.787 [2024-12-05 09:58:18.402876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:30.787 [2024-12-05 09:58:18.402886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:30.787 [2024-12-05 09:58:18.402892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:26:30.787 [2024-12-05 09:58:18.402913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:30.787 [2024-12-05 09:58:18.402926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:26:30.787 [2024-12-05 09:58:18.402937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.046 [2024-12-05 09:58:18.407085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:26:31.046 [2024-12-05 09:58:18.407169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:31.046 [2024-12-05 09:58:18.407188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:31.046 [2024-12-05 09:58:18.407195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:31.046 [2024-12-05 09:58:18.407201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:26:31.046 [2024-12-05 09:58:18.407219] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:26:31.046 0% 00:26:31.046 Data Units Read: 0 00:26:31.046 Data Units Written: 0 00:26:31.046 Host Read Commands: 0 00:26:31.046 Host Write Commands: 0 00:26:31.046 Controller Busy Time: 0 minutes 00:26:31.046 Power Cycles: 0 00:26:31.046 Power On Hours: 0 hours 00:26:31.046 Unsafe Shutdowns: 0 00:26:31.046 Unrecoverable Media Errors: 0 00:26:31.046 Lifetime Error Log Entries: 0 00:26:31.046 Warning Temperature Time: 0 minutes 00:26:31.046 Critical Temperature Time: 0 minutes 00:26:31.046 00:26:31.046 Number of Queues 00:26:31.046 ================ 00:26:31.046 Number of I/O Submission Queues: 127 00:26:31.046 Number of I/O Completion Queues: 127 00:26:31.046 00:26:31.046 Active Namespaces 00:26:31.046 ================= 00:26:31.046 Namespace ID:1 00:26:31.046 
Error Recovery Timeout: Unlimited 00:26:31.046 Command Set Identifier: NVM (00h) 00:26:31.046 Deallocate: Supported 00:26:31.046 Deallocated/Unwritten Error: Not Supported 00:26:31.046 Deallocated Read Value: Unknown 00:26:31.046 Deallocate in Write Zeroes: Not Supported 00:26:31.046 Deallocated Guard Field: 0xFFFF 00:26:31.046 Flush: Supported 00:26:31.046 Reservation: Supported 00:26:31.046 Namespace Sharing Capabilities: Multiple Controllers 00:26:31.046 Size (in LBAs): 131072 (0GiB) 00:26:31.046 Capacity (in LBAs): 131072 (0GiB) 00:26:31.046 Utilization (in LBAs): 131072 (0GiB) 00:26:31.046 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:31.046 EUI64: ABCDEF0123456789 00:26:31.046 UUID: a73e543b-cc39-47c5-add4-984707962d94 00:26:31.046 Thin Provisioning: Not Supported 00:26:31.046 Per-NS Atomic Units: Yes 00:26:31.046 Atomic Boundary Size (Normal): 0 00:26:31.046 Atomic Boundary Size (PFail): 0 00:26:31.046 Atomic Boundary Offset: 0 00:26:31.046 Maximum Single Source Range Length: 65535 00:26:31.046 Maximum Copy Length: 65535 00:26:31.046 Maximum Source Range Count: 1 00:26:31.046 NGUID/EUI64 Never Reused: No 00:26:31.046 Namespace Write Protected: No 00:26:31.046 Number of LBA Formats: 1 00:26:31.046 Current LBA Format: LBA Format #00 00:26:31.046 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:31.046 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.046 rmmod nvme_tcp 00:26:31.046 rmmod nvme_fabrics 00:26:31.046 rmmod nvme_keyring 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 96524 ']' 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 96524 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 96524 ']' 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 96524 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 
00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96524 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:31.046 killing process with pid 96524 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96524' 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 96524 00:26:31.046 09:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 96524 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.424 09:58:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:26:32.424 00:26:32.424 real 0m4.207s 00:26:32.424 user 0m11.269s 00:26:32.424 sys 0m1.089s 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 ************************************ 00:26:32.424 END TEST nvmf_identify 00:26:32.424 ************************************ 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.424 09:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 ************************************ 00:26:32.424 START TEST nvmf_perf 00:26:32.424 ************************************ 00:26:32.424 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:32.684 * Looking for test storage... 00:26:32.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.684 --rc genhtml_branch_coverage=1 00:26:32.684 --rc genhtml_function_coverage=1 00:26:32.684 --rc genhtml_legend=1 00:26:32.684 --rc geninfo_all_blocks=1 00:26:32.684 --rc geninfo_unexecuted_blocks=1 00:26:32.684 00:26:32.684 ' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.684 --rc genhtml_branch_coverage=1 00:26:32.684 --rc genhtml_function_coverage=1 00:26:32.684 --rc genhtml_legend=1 00:26:32.684 --rc geninfo_all_blocks=1 00:26:32.684 --rc geninfo_unexecuted_blocks=1 00:26:32.684 00:26:32.684 ' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.684 --rc genhtml_branch_coverage=1 00:26:32.684 --rc genhtml_function_coverage=1 00:26:32.684 --rc genhtml_legend=1 00:26:32.684 --rc geninfo_all_blocks=1 00:26:32.684 --rc geninfo_unexecuted_blocks=1 00:26:32.684 00:26:32.684 ' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.684 --rc genhtml_branch_coverage=1 00:26:32.684 --rc genhtml_function_coverage=1 00:26:32.684 --rc genhtml_legend=1 00:26:32.684 --rc geninfo_all_blocks=1 00:26:32.684 --rc geninfo_unexecuted_blocks=1 00:26:32.684 00:26:32.684 ' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.684 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:32.685 Cannot find device "nvmf_init_br" 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:32.685 Cannot find device "nvmf_init_br2" 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:32.685 Cannot find device "nvmf_tgt_br" 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:32.685 Cannot find device "nvmf_tgt_br2" 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:32.685 Cannot find device "nvmf_init_br" 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:26:32.685 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:32.944 Cannot find device "nvmf_init_br2" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:32.944 Cannot find device "nvmf_tgt_br" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:32.944 Cannot find device "nvmf_tgt_br2" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:32.944 Cannot find device "nvmf_br" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:32.944 Cannot find device "nvmf_init_if" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:32.944 Cannot find device "nvmf_init_if2" 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:32.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:32.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:32.944 09:58:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:32.944 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:32.945 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:33.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:33.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:26:33.204 00:26:33.204 --- 10.0.0.3 ping statistics --- 00:26:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.204 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:33.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:33.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:26:33.204 00:26:33.204 --- 10.0.0.4 ping statistics --- 00:26:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.204 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:33.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:33.204 00:26:33.204 --- 10.0.0.1 ping statistics --- 00:26:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.204 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:33.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:26:33.204 00:26:33.204 --- 10.0.0.2 ping statistics --- 00:26:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.204 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=96814 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 96814 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 96814 ']' 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
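nvmf_veth_init above first tears down any stale interfaces (the "Cannot find device" lines are the expected misses on a clean host), then builds the fixture: two veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, addresses assigned, iptables ACCEPT rules added, and reachability checked with single pings. A condensed sketch of the same topology, with the second initiator/target pair and the firewall rules omitted:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3        # host-side initiator can now reach the target namespace
With the bridge in place, nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...) while rpc.py on the host still reaches it over the shared /var/tmp/spdk.sock socket.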
00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.204 09:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:33.204 [2024-12-05 09:58:20.753482] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:33.204 [2024-12-05 09:58:20.753632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.463 [2024-12-05 09:58:20.922176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.463 [2024-12-05 09:58:21.034110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.463 [2024-12-05 09:58:21.034180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.463 [2024-12-05 09:58:21.034217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.463 [2024-12-05 09:58:21.034229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.463 [2024-12-05 09:58:21.034244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.463 [2024-12-05 09:58:21.036221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.463 [2024-12-05 09:58:21.036378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.463 [2024-12-05 09:58:21.037431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.463 [2024-12-05 09:58:21.037450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:34.399 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:34.658 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:34.658 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:35.226 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:35.226 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:35.484 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:35.485 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:35.485 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
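Condensed, the bdev preparation in this trace is the following (a sketch against the default RPC socket; the Nvme0 bdev itself comes from the gen_nvme.sh config loaded above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512    # 64 MiB malloc bdev, 512 B blocks -> "Malloc0"
  # recover the PCI address of the attached NVMe controller (0000:00:10.0 here)
  $rpc framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr'

Both bdevs are then exported as namespaces of nqn.2016-06.io.spdk:cnode1 in the steps that follow.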
00:26:35.485 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:35.485 09:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:35.773 [2024-12-05 09:58:23.122151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.773 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.773 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:35.773 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.032 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:36.032 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:36.291 09:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:36.550 [2024-12-05 09:58:24.037412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:36.550 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:36.809 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:36.809 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:36.809 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:36.809 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:38.239 Initializing NVMe Controllers 00:26:38.239 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:38.239 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:38.239 Initialization complete. Launching workers. 00:26:38.239 ======================================================== 00:26:38.239 Latency(us) 00:26:38.239 Device Information : IOPS MiB/s Average min max 00:26:38.239 PCIE (0000:00:10.0) NSID 1 from core 0: 19101.57 74.62 1675.64 390.67 8937.60 00:26:38.239 ======================================================== 00:26:38.239 Total : 19101.57 74.62 1675.64 390.67 8937.60 00:26:38.239 00:26:38.239 09:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:39.614 Initializing NVMe Controllers 00:26:39.614 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.614 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:39.614 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:39.614 Initialization complete. Launching workers. 
00:26:39.614 ======================================================== 00:26:39.614 Latency(us) 00:26:39.614 Device Information : IOPS MiB/s Average min max 00:26:39.614 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2872.97 11.22 347.49 132.28 6313.88 00:26:39.614 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.41 4061.70 12039.17 00:26:39.614 ======================================================== 00:26:39.614 Total : 2996.96 11.71 668.39 132.28 12039.17 00:26:39.614 00:26:39.614 09:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:40.991 Initializing NVMe Controllers 00:26:40.991 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.991 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:40.991 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:40.991 Initialization complete. Launching workers. 00:26:40.991 ======================================================== 00:26:40.991 Latency(us) 00:26:40.991 Device Information : IOPS MiB/s Average min max 00:26:40.991 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8237.67 32.18 3885.53 771.31 9244.14 00:26:40.991 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2586.11 10.10 12483.19 7027.47 23793.96 00:26:40.991 ======================================================== 00:26:40.991 Total : 10823.78 42.28 5939.75 771.31 23793.96 00:26:40.991 00:26:40.991 09:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:40.991 09:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:44.282 Initializing NVMe Controllers 00:26:44.282 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.282 Controller IO queue size 128, less than required. 00:26:44.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.282 Controller IO queue size 128, less than required. 00:26:44.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.282 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.282 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:44.282 Initialization complete. Launching workers. 
00:26:44.282 ======================================================== 00:26:44.282 Latency(us) 00:26:44.282 Device Information : IOPS MiB/s Average min max 00:26:44.282 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1289.33 322.33 103015.49 66483.02 246039.44 00:26:44.282 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 527.61 131.90 253285.28 105607.53 469115.14 00:26:44.282 ======================================================== 00:26:44.282 Total : 1816.94 454.24 146651.53 66483.02 469115.14 00:26:44.282 00:26:44.282 09:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:26:44.282 Initializing NVMe Controllers 00:26:44.282 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.282 Controller IO queue size 128, less than required. 00:26:44.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.282 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:44.282 Controller IO queue size 128, less than required. 00:26:44.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.282 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:44.282 WARNING: Some requested NVMe devices were skipped 00:26:44.282 No valid NVMe controllers or AIO or URING devices found 00:26:44.282 09:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:26:47.664 Initializing NVMe Controllers 00:26:47.664 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.664 Controller IO queue size 128, less than required. 00:26:47.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.664 Controller IO queue size 128, less than required. 00:26:47.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.664 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.664 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:47.664 Initialization complete. Launching workers. 
00:26:47.664 00:26:47.664 ==================== 00:26:47.664 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:47.664 TCP transport: 00:26:47.664 polls: 8013 00:26:47.664 idle_polls: 5380 00:26:47.664 sock_completions: 2633 00:26:47.664 nvme_completions: 3867 00:26:47.664 submitted_requests: 5802 00:26:47.664 queued_requests: 1 00:26:47.664 00:26:47.664 ==================== 00:26:47.664 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:47.664 TCP transport: 00:26:47.664 polls: 9267 00:26:47.664 idle_polls: 6588 00:26:47.664 sock_completions: 2679 00:26:47.664 nvme_completions: 5207 00:26:47.664 submitted_requests: 7750 00:26:47.664 queued_requests: 1 00:26:47.664 ======================================================== 00:26:47.664 Latency(us) 00:26:47.664 Device Information : IOPS MiB/s Average min max 00:26:47.664 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 966.46 241.61 142712.33 90780.02 349919.97 00:26:47.664 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1301.44 325.36 97708.53 52219.75 244418.03 00:26:47.664 ======================================================== 00:26:47.664 Total : 2267.90 566.98 116886.73 52219.75 349919.97 00:26:47.664 00:26:47.664 09:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:47.664 09:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=df6ca82d-7f11-4dd9-a40c-50f04315583f 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb df6ca82d-7f11-4dd9-a40c-50f04315583f 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=df6ca82d-7f11-4dd9-a40c-50f04315583f 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:26:47.664 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:47.921 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:26:47.921 { 00:26:47.921 "base_bdev": "Nvme0n1", 00:26:47.921 "block_size": 4096, 00:26:47.921 "cluster_size": 4194304, 00:26:47.921 "free_clusters": 1278, 00:26:47.921 "name": "lvs_0", 00:26:47.921 "total_data_clusters": 1278, 00:26:47.921 "uuid": "df6ca82d-7f11-4dd9-a40c-50f04315583f" 00:26:47.921 } 00:26:47.921 ]' 00:26:47.921 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="df6ca82d-7f11-4dd9-a40c-50f04315583f") .free_clusters' 00:26:47.921 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:26:47.921 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="df6ca82d-7f11-4dd9-a40c-50f04315583f") .cluster_size' 00:26:48.179 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:26:48.179 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:26:48.179 5112 00:26:48.179 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:26:48.179 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:26:48.179 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u df6ca82d-7f11-4dd9-a40c-50f04315583f lbd_0 5112 00:26:48.437 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=66dfe9d9-6b90-45f8-8363-529339a7bd76 00:26:48.437 09:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 66dfe9d9-6b90-45f8-8363-529339a7bd76 lvs_n_0 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2fcf7168-896a-4794-a5f3-998f814ca079 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2fcf7168-896a-4794-a5f3-998f814ca079 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=2fcf7168-896a-4794-a5f3-998f814ca079 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:26:48.696 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:26:48.954 { 00:26:48.954 "base_bdev": "Nvme0n1", 00:26:48.954 "block_size": 4096, 00:26:48.954 "cluster_size": 4194304, 00:26:48.954 "free_clusters": 0, 00:26:48.954 "name": "lvs_0", 00:26:48.954 "total_data_clusters": 1278, 00:26:48.954 "uuid": "df6ca82d-7f11-4dd9-a40c-50f04315583f" 00:26:48.954 }, 00:26:48.954 { 00:26:48.954 "base_bdev": "66dfe9d9-6b90-45f8-8363-529339a7bd76", 00:26:48.954 "block_size": 4096, 00:26:48.954 "cluster_size": 4194304, 00:26:48.954 "free_clusters": 1276, 00:26:48.954 "name": "lvs_n_0", 00:26:48.954 "total_data_clusters": 1276, 00:26:48.954 "uuid": "2fcf7168-896a-4794-a5f3-998f814ca079" 00:26:48.954 } 00:26:48.954 ]' 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2fcf7168-896a-4794-a5f3-998f814ca079") .free_clusters' 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2fcf7168-896a-4794-a5f3-998f814ca079") .cluster_size' 00:26:48.954 5104 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:26:48.954 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2fcf7168-896a-4794-a5f3-998f814ca079 lbd_nest_0 5104 00:26:49.213 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c08ad408-1684-43e0-890a-b47b89175f48 00:26:49.213 09:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:49.780 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:49.780 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c08ad408-1684-43e0-890a-b47b89175f48 00:26:49.780 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:50.039 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:50.039 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:50.039 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:50.039 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:50.039 09:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:50.607 Initializing NVMe Controllers 00:26:50.607 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.607 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:50.607 WARNING: Some requested NVMe devices were skipped 00:26:50.607 No valid NVMe controllers or AIO or URING devices found 00:26:50.607 09:58:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:50.607 09:58:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:02.815 Initializing NVMe Controllers 00:27:02.815 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.815 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.815 Initialization complete. Launching workers. 
00:27:02.815 ======================================================== 00:27:02.815 Latency(us) 00:27:02.815 Device Information : IOPS MiB/s Average min max 00:27:02.815 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 748.40 93.55 1335.72 448.76 8560.00 00:27:02.815 ======================================================== 00:27:02.815 Total : 748.40 93.55 1335.72 448.76 8560.00 00:27:02.815 00:27:02.815 09:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:02.815 09:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:02.815 09:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:02.815 Initializing NVMe Controllers 00:27:02.815 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.815 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:27:02.815 WARNING: Some requested NVMe devices were skipped 00:27:02.815 No valid NVMe controllers or AIO or URING devices found 00:27:02.815 09:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:02.815 09:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:12.785 Initializing NVMe Controllers 00:27:12.785 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.785 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:12.785 Initialization complete. Launching workers. 
00:27:12.785 ======================================================== 00:27:12.785 Latency(us) 00:27:12.785 Device Information : IOPS MiB/s Average min max 00:27:12.785 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1169.03 146.13 27380.79 7991.89 231578.54 00:27:12.785 ======================================================== 00:27:12.785 Total : 1169.03 146.13 27380.79 7991.89 231578.54 00:27:12.785 00:27:12.785 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:12.785 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:12.785 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:12.785 Initializing NVMe Controllers 00:27:12.785 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.785 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:27:12.785 WARNING: Some requested NVMe devices were skipped 00:27:12.785 No valid NVMe controllers or AIO or URING devices found 00:27:12.785 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:12.785 09:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:22.760 Initializing NVMe Controllers 00:27:22.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.760 Controller IO queue size 128, less than required. 00:27:22.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:22.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:22.760 Initialization complete. Launching workers. 
00:27:22.760 ======================================================== 00:27:22.760 Latency(us) 00:27:22.760 Device Information : IOPS MiB/s Average min max 00:27:22.760 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3376.24 422.03 37925.03 13379.55 76589.25 00:27:22.760 ======================================================== 00:27:22.760 Total : 3376.24 422.03 37925.03 13379.55 76589.25 00:27:22.760 00:27:22.760 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.018 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c08ad408-1684-43e0-890a-b47b89175f48 00:27:23.276 09:59:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:23.533 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 66dfe9d9-6b90-45f8-8363-529339a7bd76 00:27:23.791 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:24.048 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.049 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.049 rmmod nvme_tcp 00:27:24.049 rmmod nvme_fabrics 00:27:24.049 rmmod nvme_keyring 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 96814 ']' 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 96814 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 96814 ']' 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 96814 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96814 00:27:24.306 killing process with pid 96814 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96814' 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 96814 00:27:24.306 09:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 96814 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:26.840 09:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:27:26.840 ************************************ 00:27:26.840 END TEST nvmf_perf 00:27:26.840 ************************************ 00:27:26.840 00:27:26.840 real 0m54.153s 00:27:26.840 user 3m25.075s 00:27:26.840 sys 0m10.655s 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.840 ************************************ 00:27:26.840 START TEST nvmf_fio_host 00:27:26.840 ************************************ 00:27:26.840 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:26.840 * Looking for test storage... 00:27:26.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:26.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.841 --rc genhtml_branch_coverage=1 00:27:26.841 --rc genhtml_function_coverage=1 00:27:26.841 --rc genhtml_legend=1 00:27:26.841 --rc geninfo_all_blocks=1 00:27:26.841 --rc geninfo_unexecuted_blocks=1 00:27:26.841 00:27:26.841 ' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:26.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.841 --rc genhtml_branch_coverage=1 00:27:26.841 --rc genhtml_function_coverage=1 00:27:26.841 --rc genhtml_legend=1 00:27:26.841 --rc geninfo_all_blocks=1 00:27:26.841 --rc geninfo_unexecuted_blocks=1 00:27:26.841 00:27:26.841 ' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:26.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.841 --rc genhtml_branch_coverage=1 00:27:26.841 --rc genhtml_function_coverage=1 00:27:26.841 --rc genhtml_legend=1 00:27:26.841 --rc geninfo_all_blocks=1 00:27:26.841 --rc geninfo_unexecuted_blocks=1 00:27:26.841 00:27:26.841 ' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:26.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.841 --rc genhtml_branch_coverage=1 00:27:26.841 --rc genhtml_function_coverage=1 00:27:26.841 --rc genhtml_legend=1 00:27:26.841 --rc geninfo_all_blocks=1 00:27:26.841 --rc geninfo_unexecuted_blocks=1 00:27:26.841 00:27:26.841 ' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.841 09:59:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.841 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.842 09:59:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:26.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
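The nvmftestinit path here first clears any stale interfaces (hence the "Cannot find device" lines below) and then, via nvmf_veth_init, rebuilds the test topology. Stripped of the second if/if2 path and the error handling, the rebuild is roughly (interface names and addresses taken from the NVMF_* variables above):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the peer ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br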
00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:26.842 Cannot find device "nvmf_init_br" 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:27:26.842 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:26.842 Cannot find device "nvmf_init_br2" 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:27.101 Cannot find device "nvmf_tgt_br" 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:27:27.101 Cannot find device "nvmf_tgt_br2" 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:27.101 Cannot find device "nvmf_init_br" 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:27:27.101 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:27.102 Cannot find device "nvmf_init_br2" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:27.102 Cannot find device "nvmf_tgt_br" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:27.102 Cannot find device "nvmf_tgt_br2" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:27.102 Cannot find device "nvmf_br" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:27.102 Cannot find device "nvmf_init_if" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:27.102 Cannot find device "nvmf_init_if2" 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:27.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:27.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:27.102 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:27.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:27.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:27:27.362 00:27:27.362 --- 10.0.0.3 ping statistics --- 00:27:27.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.362 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:27.362 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:27.362 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:27:27.362 00:27:27.362 --- 10.0.0.4 ping statistics --- 00:27:27.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.362 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:27.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:27:27.362 00:27:27.362 --- 10.0.0.1 ping statistics --- 00:27:27.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.362 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:27.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:27:27.362 00:27:27.362 --- 10.0.0.2 ping statistics --- 00:27:27.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.362 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=97859 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 97859 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 97859 ']' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.362 09:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 [2024-12-05 09:59:14.913114] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:27:27.362 [2024-12-05 09:59:14.913256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.621 [2024-12-05 09:59:15.081893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.621 [2024-12-05 09:59:15.200838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.621 [2024-12-05 09:59:15.200914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.621 [2024-12-05 09:59:15.200981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.621 [2024-12-05 09:59:15.200997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.621 [2024-12-05 09:59:15.201013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
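For readability: the nvmf_veth_init sequence traced above (nvmf/common.sh@145-225) boils down to the topology setup sketched below. This is a condensed reconstruction from the xtrace, not the literal common.sh source; the namespace, interface names, and 10.0.0.x/24 addresses are exactly those in the log, while ordering and helper wrappers (ipts, the pre-delete probes) are simplified away.

# Condensed sketch of the veth/bridge topology nvmf_veth_init builds (reconstructed from the trace above)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 1 <-> bridge
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 2 <-> bridge
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target 1    <-> bridge
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target 2    <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br  master nvmf_br                       # bridge all host-side peers together
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# The real rules also carry "-m comment --comment SPDK_NVMF:..." tags so the
# iptr cleanup helper can strip them later via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings in the trace verify the result: 10.0.0.1/10.0.0.2 on the host side reach 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk across nvmf_br, and vice versa, before nvmf_tgt is launched inside the namespace with 'ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF'.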
00:27:27.621 [2024-12-05 09:59:15.203142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.621 [2024-12-05 09:59:15.203277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.621 [2024-12-05 09:59:15.203498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.621 [2024-12-05 09:59:15.204193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.555 09:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.555 09:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:28.555 09:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:28.812 [2024-12-05 09:59:16.199621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.812 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:28.812 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.812 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.812 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:29.071 Malloc1 00:27:29.071 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.330 09:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:29.589 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:29.847 [2024-12-05 09:59:17.242747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:29.847 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:30.105 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:27:30.105 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:30.105 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:30.105 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:30.106 09:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:30.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:30.106 fio-3.35 00:27:30.106 Starting 1 thread 00:27:32.634 00:27:32.634 test: (groupid=0, jobs=1): err= 0: pid=97978: Thu Dec 5 09:59:20 2024 00:27:32.634 read: IOPS=8215, BW=32.1MiB/s (33.7MB/s)(64.4MiB/2008msec) 00:27:32.634 slat (usec): min=2, max=507, avg= 2.97, stdev= 5.30 00:27:32.634 clat (usec): min=3925, max=16861, avg=8131.97, stdev=734.04 00:27:32.634 lat (usec): min=3977, max=16864, avg=8134.94, stdev=733.74 00:27:32.634 clat percentiles (usec): 00:27:32.634 | 1.00th=[ 6849], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:27:32.634 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:27:32.634 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:27:32.634 | 99.00th=[10159], 99.50th=[10814], 99.90th=[15795], 99.95th=[16319], 00:27:32.634 | 99.99th=[16909] 00:27:32.634 bw ( KiB/s): min=31136, max=34112, per=99.98%, avg=32856.00, stdev=1390.30, samples=4 00:27:32.634 iops : min= 7784, max= 8528, avg=8214.00, stdev=347.57, samples=4 00:27:32.634 write: IOPS=8219, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2008msec); 0 zone resets 00:27:32.634 slat (usec): min=2, max=372, avg= 3.21, stdev= 3.92 00:27:32.634 clat (usec): min=3193, max=16419, avg=7343.69, stdev=653.83 00:27:32.634 lat (usec): min=3212, max=16422, avg=7346.90, stdev=653.68 00:27:32.634 clat percentiles (usec): 00:27:32.634 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6915], 00:27:32.634 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:27:32.634 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:27:32.634 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[14091], 99.95th=[15664], 00:27:32.634 | 99.99th=[16319] 00:27:32.634 bw ( KiB/s): min=31944, max=33504, per=100.00%, avg=32892.00, stdev=740.12, samples=4 00:27:32.634 iops : min= 7986, max= 8376, avg=8223.00, stdev=185.03, samples=4 00:27:32.634 lat (msec) : 4=0.06%, 10=99.15%, 20=0.79% 00:27:32.634 cpu : usr=65.77%, sys=24.86%, ctx=32, majf=0, 
minf=1553 00:27:32.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:32.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:32.634 issued rwts: total=16497,16504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:32.634 00:27:32.635 Run status group 0 (all jobs): 00:27:32.635 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.4MiB (67.6MB), run=2008-2008msec 00:27:32.635 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.6MB), run=2008-2008msec 00:27:32.893 ----------------------------------------------------- 00:27:32.893 Suppressions used: 00:27:32.893 count bytes template 00:27:32.893 1 57 /usr/src/fio/parse.c 00:27:32.893 1 8 libtcmalloc_minimal.so 00:27:32.893 ----------------------------------------------------- 00:27:32.893 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:32.893 09:59:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:27:32.893 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:32.893 fio-3.35 00:27:32.893 Starting 1 thread 00:27:35.427 00:27:35.427 test: (groupid=0, jobs=1): err= 0: pid=98021: Thu Dec 5 09:59:22 2024 00:27:35.427 read: IOPS=7379, BW=115MiB/s (121MB/s)(231MiB/2007msec) 00:27:35.427 slat (usec): min=3, max=114, avg= 3.75, stdev= 2.49 00:27:35.427 clat (usec): min=3138, max=20490, avg=9964.58, stdev=2150.45 00:27:35.427 lat (usec): min=3141, max=20493, avg=9968.34, stdev=2150.51 00:27:35.427 clat percentiles (usec): 00:27:35.427 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8160], 00:27:35.427 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10421], 00:27:35.427 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12780], 95.00th=[13698], 00:27:35.427 | 99.00th=[15795], 99.50th=[16450], 99.90th=[17957], 99.95th=[18220], 00:27:35.427 | 99.99th=[19530] 00:27:35.427 bw ( KiB/s): min=53312, max=68704, per=51.11%, avg=60344.00, stdev=7377.43, samples=4 00:27:35.427 iops : min= 3332, max= 4294, avg=3771.50, stdev=461.09, samples=4 00:27:35.427 write: IOPS=4335, BW=67.7MiB/s (71.0MB/s)(123MiB/1821msec); 0 zone resets 00:27:35.427 slat (usec): min=30, max=238, avg=33.32, stdev= 7.38 00:27:35.427 clat (usec): min=5753, max=26301, avg=13042.01, stdev=2345.62 00:27:35.427 lat (usec): min=5784, max=26332, avg=13075.32, stdev=2345.52 00:27:35.427 clat percentiles (usec): 00:27:35.427 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:27:35.427 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:27:35.427 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15795], 95.00th=[17171], 00:27:35.427 | 99.00th=[18744], 99.50th=[20055], 99.90th=[25560], 99.95th=[26084], 00:27:35.427 | 99.99th=[26346] 00:27:35.427 bw ( KiB/s): min=55808, max=70208, per=90.39%, avg=62704.00, stdev=6851.36, samples=4 00:27:35.427 iops : min= 3488, max= 4388, avg=3919.00, stdev=428.21, samples=4 00:27:35.427 lat (msec) : 4=0.06%, 10=36.17%, 20=63.59%, 50=0.18% 00:27:35.427 cpu : usr=74.83%, sys=17.90%, ctx=71, majf=0, minf=2225 00:27:35.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:35.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:35.427 issued rwts: total=14811,7895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:35.427 00:27:35.427 Run status group 0 (all jobs): 00:27:35.427 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=231MiB (243MB), run=2007-2007msec 00:27:35.427 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=123MiB (129MB), run=1821-1821msec 00:27:35.686 ----------------------------------------------------- 00:27:35.686 Suppressions used: 00:27:35.686 count bytes template 00:27:35.686 1 57 /usr/src/fio/parse.c 00:27:35.686 665 63840 /usr/src/fio/iolog.c 00:27:35.686 1 8 libtcmalloc_minimal.so 00:27:35.686 ----------------------------------------------------- 00:27:35.686 00:27:35.686 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:35.945 09:59:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:35.945 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:27:36.512 Nvme0n1 00:27:36.512 09:59:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:27:36.772 { 00:27:36.772 "base_bdev": "Nvme0n1", 00:27:36.772 "block_size": 4096, 00:27:36.772 "cluster_size": 1073741824, 00:27:36.772 "free_clusters": 4, 00:27:36.772 "name": "lvs_0", 00:27:36.772 "total_data_clusters": 4, 00:27:36.772 "uuid": "7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67" 00:27:36.772 } 00:27:36.772 ]' 00:27:36.772 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67") .free_clusters' 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67") .cluster_size' 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:27:37.031 4096 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:27:37.031 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -l lvs_0 lbd_0 4096 00:27:37.290 31b49e65-00a7-4290-918d-32c50964c8df 00:27:37.290 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:37.290 09:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:37.549 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:37.808 09:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:38.066 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:38.066 fio-3.35 00:27:38.066 Starting 1 thread 00:27:40.600 00:27:40.600 test: (groupid=0, jobs=1): err= 0: pid=98173: Thu Dec 5 09:59:27 2024 00:27:40.600 read: IOPS=4993, BW=19.5MiB/s (20.5MB/s)(39.2MiB/2010msec) 
00:27:40.600 slat (usec): min=2, max=234, avg= 3.59, stdev= 4.45 00:27:40.600 clat (usec): min=5621, max=23100, avg=13466.58, stdev=1355.90 00:27:40.600 lat (usec): min=5627, max=23102, avg=13470.16, stdev=1355.73 00:27:40.600 clat percentiles (usec): 00:27:40.600 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:27:40.600 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:27:40.600 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15008], 95.00th=[15664], 00:27:40.600 | 99.00th=[17433], 99.50th=[18482], 99.90th=[21627], 99.95th=[21890], 00:27:40.600 | 99.99th=[22938] 00:27:40.600 bw ( KiB/s): min=18608, max=20608, per=99.82%, avg=19936.00, stdev=942.87, samples=4 00:27:40.600 iops : min= 4652, max= 5152, avg=4984.00, stdev=235.72, samples=4 00:27:40.600 write: IOPS=4986, BW=19.5MiB/s (20.4MB/s)(39.1MiB/2010msec); 0 zone resets 00:27:40.600 slat (usec): min=2, max=218, avg= 3.77, stdev= 4.14 00:27:40.600 clat (usec): min=2493, max=21785, avg=12025.08, stdev=1239.23 00:27:40.600 lat (usec): min=2501, max=21788, avg=12028.85, stdev=1239.17 00:27:40.600 clat percentiles (usec): 00:27:40.600 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:27:40.600 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:27:40.600 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:27:40.600 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18744], 99.95th=[20317], 00:27:40.600 | 99.99th=[21365] 00:27:40.600 bw ( KiB/s): min=19544, max=20464, per=99.91%, avg=19926.00, stdev=433.10, samples=4 00:27:40.600 iops : min= 4886, max= 5116, avg=4981.50, stdev=108.28, samples=4 00:27:40.600 lat (msec) : 4=0.02%, 10=1.35%, 20=98.48%, 50=0.14% 00:27:40.600 cpu : usr=73.37%, sys=20.21%, ctx=24, majf=0, minf=1553 00:27:40.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:40.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:40.600 issued rwts: total=10036,10022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:40.600 00:27:40.600 Run status group 0 (all jobs): 00:27:40.600 READ: bw=19.5MiB/s (20.5MB/s), 19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=39.2MiB (41.1MB), run=2010-2010msec 00:27:40.600 WRITE: bw=19.5MiB/s (20.4MB/s), 19.5MiB/s-19.5MiB/s (20.4MB/s-20.4MB/s), io=39.1MiB (41.1MB), run=2010-2010msec 00:27:40.600 ----------------------------------------------------- 00:27:40.600 Suppressions used: 00:27:40.600 count bytes template 00:27:40.600 1 58 /usr/src/fio/parse.c 00:27:40.600 1 8 libtcmalloc_minimal.so 00:27:40.600 ----------------------------------------------------- 00:27:40.600 00:27:40.600 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:40.859 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=29ae1894-71fa-41ff-9be4-c5903f83a23b 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 29ae1894-71fa-41ff-9be4-c5903f83a23b 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local 
lvs_uuid=29ae1894-71fa-41ff-9be4-c5903f83a23b 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:27:41.118 09:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:27:41.687 { 00:27:41.687 "base_bdev": "Nvme0n1", 00:27:41.687 "block_size": 4096, 00:27:41.687 "cluster_size": 1073741824, 00:27:41.687 "free_clusters": 0, 00:27:41.687 "name": "lvs_0", 00:27:41.687 "total_data_clusters": 4, 00:27:41.687 "uuid": "7cd5e7a3-2b25-4cd0-b4d5-9f025b935b67" 00:27:41.687 }, 00:27:41.687 { 00:27:41.687 "base_bdev": "31b49e65-00a7-4290-918d-32c50964c8df", 00:27:41.687 "block_size": 4096, 00:27:41.687 "cluster_size": 4194304, 00:27:41.687 "free_clusters": 1022, 00:27:41.687 "name": "lvs_n_0", 00:27:41.687 "total_data_clusters": 1022, 00:27:41.687 "uuid": "29ae1894-71fa-41ff-9be4-c5903f83a23b" 00:27:41.687 } 00:27:41.687 ]' 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="29ae1894-71fa-41ff-9be4-c5903f83a23b") .free_clusters' 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="29ae1894-71fa-41ff-9be4-c5903f83a23b") .cluster_size' 00:27:41.687 4088 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:27:41.687 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:27:41.946 2d87396c-8b0a-41aa-8bdd-319c4dc3515f 00:27:41.946 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:42.205 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:42.465 09:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:42.725 09:59:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:42.725 09:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:27:42.725 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:42.725 fio-3.35 00:27:42.725 Starting 1 thread 00:27:45.259 00:27:45.259 test: (groupid=0, jobs=1): err= 0: pid=98285: Thu Dec 5 09:59:32 2024 00:27:45.259 read: IOPS=4407, BW=17.2MiB/s (18.1MB/s)(34.6MiB/2012msec) 00:27:45.259 slat (usec): min=2, max=368, avg= 3.67, stdev= 6.08 00:27:45.259 clat (usec): min=6261, max=27706, avg=15300.91, stdev=1436.20 00:27:45.259 lat (usec): min=6273, max=27708, avg=15304.58, stdev=1435.81 00:27:45.259 clat percentiles (usec): 00:27:45.259 | 1.00th=[12387], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:27:45.259 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15270], 60.00th=[15533], 00:27:45.259 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:27:45.259 | 99.00th=[18744], 99.50th=[19792], 99.90th=[25560], 99.95th=[27395], 00:27:45.259 | 99.99th=[27657] 00:27:45.259 bw ( KiB/s): min=16878, max=17944, per=99.86%, avg=17605.50, stdev=489.77, samples=4 00:27:45.259 iops : min= 4219, max= 4486, avg=4401.25, stdev=122.69, samples=4 00:27:45.259 write: IOPS=4416, BW=17.2MiB/s (18.1MB/s)(34.7MiB/2012msec); 0 zone resets 00:27:45.259 slat (usec): min=2, max=293, avg= 3.82, stdev= 4.85 00:27:45.259 clat (usec): min=3017, max=25494, avg=13620.81, stdev=1275.68 00:27:45.259 lat (usec): min=3085, max=25497, avg=13624.63, stdev=1275.39 00:27:45.259 clat percentiles (usec): 00:27:45.259 | 1.00th=[10814], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:27:45.259 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:27:45.259 | 70.00th=[14222], 80.00th=[14615], 
90.00th=[15008], 95.00th=[15401], 00:27:45.259 | 99.00th=[16450], 99.50th=[16909], 99.90th=[23987], 99.95th=[24249], 00:27:45.259 | 99.99th=[25560] 00:27:45.259 bw ( KiB/s): min=17472, max=17780, per=99.80%, avg=17629.00, stdev=128.47, samples=4 00:27:45.259 iops : min= 4368, max= 4445, avg=4407.25, stdev=32.12, samples=4 00:27:45.259 lat (msec) : 4=0.02%, 10=0.34%, 20=99.28%, 50=0.37% 00:27:45.259 cpu : usr=77.18%, sys=17.40%, ctx=5, majf=0, minf=1553 00:27:45.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:45.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:45.259 issued rwts: total=8868,8885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:45.259 00:27:45.259 Run status group 0 (all jobs): 00:27:45.259 READ: bw=17.2MiB/s (18.1MB/s), 17.2MiB/s-17.2MiB/s (18.1MB/s-18.1MB/s), io=34.6MiB (36.3MB), run=2012-2012msec 00:27:45.259 WRITE: bw=17.2MiB/s (18.1MB/s), 17.2MiB/s-17.2MiB/s (18.1MB/s-18.1MB/s), io=34.7MiB (36.4MB), run=2012-2012msec 00:27:45.518 ----------------------------------------------------- 00:27:45.518 Suppressions used: 00:27:45.518 count bytes template 00:27:45.518 1 58 /usr/src/fio/parse.c 00:27:45.518 1 8 libtcmalloc_minimal.so 00:27:45.518 ----------------------------------------------------- 00:27:45.518 00:27:45.518 09:59:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:45.777 09:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:27:45.777 09:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:46.035 09:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:46.294 09:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:46.552 09:59:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:46.811 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.379 rmmod nvme_tcp 00:27:47.379 rmmod nvme_fabrics 00:27:47.379 rmmod nvme_keyring 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 97859 ']' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 97859 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 97859 ']' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 97859 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97859 00:27:47.379 killing process with pid 97859 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97859' 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 97859 00:27:47.379 09:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 97859 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:48.752 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:48.753 09:59:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:27:48.753 ************************************ 00:27:48.753 END TEST nvmf_fio_host 00:27:48.753 ************************************ 00:27:48.753 00:27:48.753 real 0m22.099s 00:27:48.753 user 1m35.009s 00:27:48.753 sys 0m4.732s 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.753 ************************************ 00:27:48.753 START TEST nvmf_failover 00:27:48.753 ************************************ 00:27:48.753 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:49.011 * Looking for test storage... 
00:27:49.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.011 --rc genhtml_branch_coverage=1 00:27:49.011 --rc genhtml_function_coverage=1 00:27:49.011 --rc genhtml_legend=1 00:27:49.011 --rc geninfo_all_blocks=1 00:27:49.011 --rc geninfo_unexecuted_blocks=1 00:27:49.011 00:27:49.011 ' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.011 --rc genhtml_branch_coverage=1 00:27:49.011 --rc genhtml_function_coverage=1 00:27:49.011 --rc genhtml_legend=1 00:27:49.011 --rc geninfo_all_blocks=1 00:27:49.011 --rc geninfo_unexecuted_blocks=1 00:27:49.011 00:27:49.011 ' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.011 --rc genhtml_branch_coverage=1 00:27:49.011 --rc genhtml_function_coverage=1 00:27:49.011 --rc genhtml_legend=1 00:27:49.011 --rc geninfo_all_blocks=1 00:27:49.011 --rc geninfo_unexecuted_blocks=1 00:27:49.011 00:27:49.011 ' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:49.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.011 --rc genhtml_branch_coverage=1 00:27:49.011 --rc genhtml_function_coverage=1 00:27:49.011 --rc genhtml_legend=1 00:27:49.011 --rc geninfo_all_blocks=1 00:27:49.011 --rc geninfo_unexecuted_blocks=1 00:27:49.011 00:27:49.011 ' 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.011 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.012 
09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:49.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
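An aside on the shell error captured just above: test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator requires integer operands on both sides, so the empty string produces "[: : integer expression expected" and a non-zero exit status (the branch is simply not taken and the run continues). A minimal reproduction plus one common guard, sketched with an illustrative variable name that is not from the SPDK sources:

    flag=""
    [ "$flag" -eq 1 ] && echo enabled      # prints "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ] && echo enabled # guard: default to 0 so -eq always sees an integer; quietly false here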
00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:49.012 Cannot find device "nvmf_init_br" 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:49.012 Cannot find device "nvmf_init_br2" 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:27:49.012 Cannot find device "nvmf_tgt_br" 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:49.012 Cannot find device "nvmf_tgt_br2" 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:27:49.012 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:49.271 Cannot find device "nvmf_init_br" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:49.271 Cannot find device "nvmf_init_br2" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:49.271 Cannot find device "nvmf_tgt_br" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:49.271 Cannot find device "nvmf_tgt_br2" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:49.271 Cannot find device "nvmf_br" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:49.271 Cannot find device "nvmf_init_if" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:49.271 Cannot find device "nvmf_init_if2" 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:49.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:49.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:49.271 
09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:49.271 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:49.530 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:49.530 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:27:49.530 00:27:49.530 --- 10.0.0.3 ping statistics --- 00:27:49.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.530 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:49.530 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:49.530 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:27:49.530 00:27:49.530 --- 10.0.0.4 ping statistics --- 00:27:49.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.530 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:49.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:49.530 00:27:49.530 --- 10.0.0.1 ping statistics --- 00:27:49.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.530 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:49.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:27:49.530 00:27:49.530 --- 10.0.0.2 ping statistics --- 00:27:49.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.530 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=98628 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 98628 00:27:49.530 09:59:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 98628 ']' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.530 09:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:49.530 [2024-12-05 09:59:37.119435] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:27:49.530 [2024-12-05 09:59:37.119603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.789 [2024-12-05 09:59:37.307428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:50.048 [2024-12-05 09:59:37.477329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.048 [2024-12-05 09:59:37.477402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.048 [2024-12-05 09:59:37.477438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.048 [2024-12-05 09:59:37.477454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.048 [2024-12-05 09:59:37.477474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
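For orientation, the nvmf_veth_init pass traced above (the "Cannot find device" / "Cannot open network namespace" messages are just the idempotent cleanup finding nothing to delete on a fresh VM) builds a bridged veth topology and then nvmfappstart launches the target inside the namespace. A condensed standalone sketch, showing only the first interface of each pair; the log repeats the same steps for nvmf_init_if2/nvmf_tgt_if2, and the polling loop at the end is a simplification of the real waitforlisten helper:

    ip netns add nvmf_tgt_ns_spdk                               # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge ties the host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.3                                          # sanity check: host -> namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # cores 1-3
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.5; done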
00:27:50.048 [2024-12-05 09:59:37.480021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.048 [2024-12-05 09:59:37.480157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.048 [2024-12-05 09:59:37.480157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.614 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.614 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:50.614 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.614 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.614 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:50.615 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.615 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:50.874 [2024-12-05 09:59:38.394439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.874 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:51.132 Malloc0 00:27:51.390 09:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.649 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:51.649 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:52.217 [2024-12-05 09:59:39.547668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:52.217 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:52.217 [2024-12-05 09:59:39.763818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:52.217 09:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:52.487 [2024-12-05 09:59:39.992411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=98744 00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 98744 /var/tmp/bdevperf.sock 
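At this point host/failover.sh has assembled the whole fixture: a TCP transport, a RAM-backed namespace, one subsystem with three listeners to fail over between, and a second SPDK application (bdevperf, pid 98744) held idle until perform_tests is issued. Condensed from the commands traced above, with the loop and comments added here for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                        # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
    done
    # -z holds bdevperf idle until bdevperf.py calls perform_tests over its private RPC socket;
    # queue depth 128, 4 KiB I/O, verify workload, 15 s run; remaining flags copied from the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &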
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 98744 ']'
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:52.487 09:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:53.917 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:53.917 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:27:53.917 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:53.917 NVMe0n1
00:27:53.917 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:54.176
00:27:54.176 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=98787
00:27:54.176 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:54.176 09:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:27:55.111 09:59:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:55.369 [2024-12-05 09:59:42.975832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set
00:27:55.369 [the preceding *ERROR* line repeats verbatim at every timestamp from 2024-12-05 09:59:42.975954 through 09:59:42.977255]
00:27:55.628 09:59:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:27:58.911 09:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:58.911
00:27:58.911 09:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:27:59.170 [2024-12-05 09:59:46.568783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:27:59.171 [the preceding *ERROR* line repeats verbatim at every timestamp from 2024-12-05 09:59:46.568851 through 09:59:46.569562]
00:27:59.171 09:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:28:02.471 09:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-12-05 09:59:49.860292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:02.471 09:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:28:03.406 09:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:28:03.664 09:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 98787
00:28:10.234 { 00:28:10.234 "results": [ 00:28:10.234 {
00:28:10.234 "job": "NVMe0n1", 00:28:10.234 "core_mask": "0x1", 00:28:10.234 "workload": "verify", 00:28:10.234 "status": "finished", 00:28:10.234 "verify_range": { 00:28:10.234 "start": 0, 00:28:10.234 "length": 16384 00:28:10.234 }, 00:28:10.234 "queue_depth": 128, 00:28:10.234 "io_size": 4096, 00:28:10.234 "runtime": 15.008072, 00:28:10.234 "iops": 8294.07001778776, 00:28:10.234 "mibps": 32.39871100698344, 00:28:10.234 "io_failed": 3573, 00:28:10.234 "io_timeout": 0, 00:28:10.234 "avg_latency_us": 14973.099290296977, 00:28:10.234 "min_latency_us": 2070.3418181818183, 00:28:10.234 "max_latency_us": 25022.836363636365 00:28:10.234 } 00:28:10.234 ], 00:28:10.234 "core_count": 1 00:28:10.234 } 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 98744 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 98744 ']' 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 98744 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98744 00:28:10.234 killing process with pid 98744 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98744' 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 98744 00:28:10.234 09:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 98744 00:28:10.234 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:10.234 [2024-12-05 09:59:40.105479] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:10.234 [2024-12-05 09:59:40.105648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98744 ] 00:28:10.234 [2024-12-05 09:59:40.283695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.234 [2024-12-05 09:59:40.437009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.234 Running I/O for 15 seconds... 
00:28:10.234 8542.00 IOPS, 33.37 MiB/s [2024-12-05T09:59:57.863Z]
[2024-12-05 09:59:42.980498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-05 09:59:42.980819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for each remaining outstanding I/O on qid:1 (READ commands covering lba:77712 through lba:78416, WRITE commands covering lba:78424 through lba:78712, interleaved), every one completed ABORTED - SQ DELETION (00/08) ...]
00:28:10.238 [2024-12-05 09:59:42.989207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-05 09:59:42.989228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-05 09:59:42.989244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78720 len:8 PRP1 0x0 PRP2 0x0
[2024-12-05 09:59:42.989261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.238 [2024-12-05 09:59:42.989512] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
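try.txt is dominated by the per-command abort records condensed above; the records that actually narrate the failover are the bdev_nvme / nvme_ctrlr / nvme_tcp events. A small, hypothetical triage filter (not part of the SPDK tree) for pulling those out of a dump like this:

    import re
    import sys

    # Keep records that narrate the failover itself; drop the per-command
    # nvme_qpair.c abort spam that dominates try.txt.
    INTERESTING = re.compile(r"(bdev_nvme\.c|nvme_ctrlr\.c|nvme_tcp\.c):\d+")

    for line in sys.stdin:
        if INTERESTING.search(line):
            print(line.rstrip())

Fed this log, it would surface the "Start failover", "in failed state", "Failed to flush tqpair", "resetting controller", and "Resetting controller successful" records, in order, as seen below.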
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.238 [2024-12-05 09:59:42.989696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.238 [2024-12-05 09:59:42.989713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.238 [2024-12-05 09:59:42.989731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.238 [2024-12-05 09:59:42.989747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:10.238 [2024-12-05 09:59:42.989827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:10.238 [2024-12-05 09:59:42.993281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:10.238 [2024-12-05 09:59:43.020370] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:28:10.238 8144.00 IOPS, 31.81 MiB/s [2024-12-05T09:59:57.867Z] 8177.33 IOPS, 31.94 MiB/s [2024-12-05T09:59:57.868Z] 8142.25 IOPS, 31.81 MiB/s [2024-12-05T09:59:57.868Z] [2024-12-05 09:59:46.569810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.569864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.569902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.569924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 
[2024-12-05 09:59:46.570174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.570954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.570990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.571023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.571044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.571064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.239 [2024-12-05 09:59:46.571093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.239 [2024-12-05 09:59:46.571113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
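Every completion in this run carries the same status, printed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08). The pair in parentheses is Status Code Type / Status Code: type 0x0 is the NVMe generic command status set, and code 0x08 in that set is Command Aborted due to SQ Deletion, i.e. the status queued I/O receives when its submission queue is torn down. A minimal Python sketch of that decoding (decode_status is illustrative and the table covers only the codes seen here, not the full NVMe status tables):

# Decode the "(SCT/SC)" pair as printed above, e.g. "(00/08)".
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
}

def decode_status(sct_sc: str) -> str:
    sct, sc = (int(x, 16) for x in sct_sc.strip("()").split("/"))
    if sct == 0x0:  # generic command status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status("(00/08)"))  # -> ABORTED - SQ DELETION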
00:28:10.240 [2024-12-05 09:59:46.571892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.571930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.571980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.240 [2024-12-05 09:59:46.572220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.240 [2024-12-05 09:59:46.572239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.241 [2024-12-05 09:59:46.572668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.572920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.572954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.241 [2024-12-05 09:59:46.573602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.241 [2024-12-05 09:59:46.573622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 
[2024-12-05 09:59:46.573661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.573964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.573999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.574931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.574985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.575012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.575044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.575066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.242 [2024-12-05 09:59:46.575087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:10.242 [2024-12-05 09:59:46.575106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:46.575145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:46.575183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:46.575223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:46.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55520 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.243 [2024-12-05 09:59:46.575426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55528 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.243 [2024-12-05 09:59:46.575485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55536 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.243 [2024-12-05 09:59:46.575541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55544 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.243 [2024-12-05 09:59:46.575612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55552 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.243 [2024-12-05 09:59:46.575669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.243 [2024-12-05 09:59:46.575682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55560 len:8 PRP1 0x0 PRP2 0x0 00:28:10.243 [2024-12-05 09:59:46.575698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.575932] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:28:10.243 [2024-12-05 09:59:46.576059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.243 [2024-12-05 09:59:46.576086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.576107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.243 [2024-12-05 09:59:46.576126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.576144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.243 [2024-12-05 09:59:46.576162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.576180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.243 [2024-12-05 09:59:46.576198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:46.576215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
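This is the turning point of the sequence: the queued reads and writes above are drained with ABORTED - SQ DELETION, bdev_nvme starts a failover of nqn.2016-06.io.spdk:cnode1 from 10.0.0.3:4421 to 10.0.0.3:4422, the four outstanding ASYNC EVENT REQUESTs on the admin queue (qid:0) are aborted the same way, and the controller is marked failed before being disconnected and reset. A small Python sketch that pulls exactly these events out of a log like this one (the regexes match only the message formats visible here; this is not a general SPDK log parser):

import re
import sys

# Event patterns as they appear in the bdev_nvme / nvme_ctrlr records above.
PATTERNS = {
    "failover":  re.compile(r"Start failover from (\S+) to (\S+)"),
    "failed":    re.compile(r"\[([^\]]+)\] in failed state"),
    "resetting": re.compile(r"\[([^\]]+)\] resetting controller"),
    "reset_ok":  re.compile(r"\[([^\]]+)\] Resetting controller successful"),
}

for line in sys.stdin:
    for name, pat in PATTERNS.items():
        m = pat.search(line)
        if m:
            print(name, *m.groups())

Fed this section, it reports the failover from 10.0.0.3:4421 to 10.0.0.3:4422, the failed state of controller path 3, and the successful reset that follows.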
00:28:10.243 [2024-12-05 09:59:46.576282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:10.243 [2024-12-05 09:59:46.579885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:10.243 [2024-12-05 09:59:46.607911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:28:10.243 8052.60 IOPS, 31.46 MiB/s [2024-12-05T09:59:57.872Z] 8044.17 IOPS, 31.42 MiB/s [2024-12-05T09:59:57.872Z] 8076.86 IOPS, 31.55 MiB/s [2024-12-05T09:59:57.872Z] 8074.00 IOPS, 31.54 MiB/s [2024-12-05T09:59:57.872Z] 8062.89 IOPS, 31.50 MiB/s [2024-12-05T09:59:57.872Z] [2024-12-05 09:59:51.090491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.090904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.090922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
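Once the reset completes, the per-interval throughput samples resume (8052.60 IOPS, 31.46 MiB/s and so on), and they are consistent with a 4 KiB I/O size: the WRITE entries carry len:0x1000 (4096 bytes) in their SGL fields, and len:8 in blocks, implying a 512-byte LBA format. A quick arithmetic check in Python using the five samples above:

IO_SIZE = 0x1000  # bytes, from the "len:0x1000" SGL fields in this log

for iops in (8052.60, 8044.17, 8076.86, 8074.00, 8062.89):
    mib_s = iops * IO_SIZE / (1024 * 1024)
    print(f"{iops:8.2f} IOPS -> {mib_s:.2f} MiB/s")
# -> 31.46, 31.42, 31.55, 31.54, 31.50 MiB/s, matching the samples

The abort records that follow, timestamped 09:59:51, repeat the same pattern, evidently a later iteration of the same failover exercise.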
00:28:10.243 [2024-12-05 09:59:51.090958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.091008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.243 [2024-12-05 09:59:51.091030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.243 [2024-12-05 09:59:51.091050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.091958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.091994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.092027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.092070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.092110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.092150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.244 [2024-12-05 09:59:51.092190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.244 [2024-12-05 09:59:51.092230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.244 [2024-12-05 09:59:51.092271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91496 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.244 [2024-12-05 09:59:51.092356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.244 [2024-12-05 09:59:51.092393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.244 [2024-12-05 09:59:51.092412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
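The bulk of this section is one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per aborted I/O. A sketch that condenses such a run into a summary, for when the per-command detail is not needed (the regex matches the exact print format above and nothing more):

import re
import sys
from collections import Counter

# One match per aborted command record, e.g. "READ sqid:1 cid:14 nsid:1 lba:54592 len:8".
CMD = re.compile(r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

counts = Counter()
lbas = []
for line in sys.stdin:
    for op, _sqid, _cid, _nsid, lba, _len in CMD.findall(line):
        counts[op] += 1
        lbas.append(int(lba))

for op, n in sorted(counts.items()):
    print(f"{op}: {n} aborted commands")
if lbas:
    print(f"LBA range: {min(lbas)}..{max(lbas)}")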
00:28:10.245 [2024-12-05 09:59:51.092752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.092979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.092999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.245 [2024-12-05 09:59:51.093487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.245 [2024-12-05 09:59:51.093506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.093841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.093877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.093914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.093968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.093987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.094019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.094085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.094125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.246 [2024-12-05 09:59:51.094507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.246 [2024-12-05 09:59:51.094895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.246 [2024-12-05 09:59:51.094914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.094932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.094951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.094971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:10.247 [2024-12-05 09:59:51.095003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.095977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.095996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.096015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.096034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.096054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.096072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.096092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.247 [2024-12-05 09:59:51.096111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.247 [2024-12-05 09:59:51.096129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:28:10.247 [2024-12-05 09:59:51.096160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:10.248 [2024-12-05 09:59:51.096175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:10.248 [2024-12-05 09:59:51.096190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92152 len:8 PRP1 0x0 PRP2 0x0 00:28:10.248 [2024-12-05 09:59:51.096216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.248 [2024-12-05 09:59:51.096460] 
bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:28:10.248 [2024-12-05 09:59:51.096531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.248 [2024-12-05 09:59:51.096558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.248 [2024-12-05 09:59:51.096578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.248 [2024-12-05 09:59:51.096596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.248 [2024-12-05 09:59:51.096614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.248 [2024-12-05 09:59:51.096633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.248 [2024-12-05 09:59:51.096651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.248 [2024-12-05 09:59:51.096669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.248 [2024-12-05 09:59:51.096686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:10.248 [2024-12-05 09:59:51.096735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:10.248 [2024-12-05 09:59:51.100258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:10.248 [2024-12-05 09:59:51.128335] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
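The sequence above is one complete failover cycle: bdev_nvme drops the failed path (10.0.0.3:4422), aborts the queued admin and I/O commands, and resets the controller onto the surviving listener (10.0.0.3:4420). The test drives this purely through SPDK JSON-RPC; a minimal sketch of the pattern, assembled from the rpc.py command forms traced elsewhere in this log rather than quoted verbatim from failover.sh:

# expose the subsystem on more than one TCP listener
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
# attach each path to the same bdev in failover multipath mode (-x failover)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# detaching the active path forces the 'Start failover ... Resetting controller successful' sequence seen above
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

(rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace.)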
00:28:10.248 8070.10 IOPS, 31.52 MiB/s [2024-12-05T09:59:57.877Z] 8125.73 IOPS, 31.74 MiB/s [2024-12-05T09:59:57.877Z] 8175.58 IOPS, 31.94 MiB/s [2024-12-05T09:59:57.877Z] 8216.85 IOPS, 32.10 MiB/s [2024-12-05T09:59:57.877Z] 8259.71 IOPS, 32.26 MiB/s [2024-12-05T09:59:57.877Z] 8293.13 IOPS, 32.40 MiB/s 00:28:10.248 Latency(us) 00:28:10.248 [2024-12-05T09:59:57.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.248 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:10.248 Verification LBA range: start 0x0 length 0x4000 00:28:10.248 NVMe0n1 : 15.01 8294.07 32.40 238.07 0.00 14973.10 2070.34 25022.84 00:28:10.248 [2024-12-05T09:59:57.877Z] =================================================================================================================== 00:28:10.248 [2024-12-05T09:59:57.877Z] Total : 8294.07 32.40 238.07 0.00 14973.10 2070.34 25022.84 00:28:10.248 Received shutdown signal, test time was about 15.000000 seconds 00:28:10.248 00:28:10.248 Latency(us) 00:28:10.248 [2024-12-05T09:59:57.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.248 [2024-12-05T09:59:57.877Z] =================================================================================================================== 00:28:10.248 [2024-12-05T09:59:57.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=98992 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 98992 /var/tmp/bdevperf.sock 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 98992 ']' 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
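As a sanity check on the 15 s summary above: the MiB/s column is just the IOPS column scaled by the fixed 4096-byte I/O size, 8294.07 IOPS x 4096 B = 33,972,511 B/s, and 33,972,511 / 2^20 = 32.40 MiB/s, matching the reported value. The Fail/s figure (238.07) is the rate of I/Os aborted by the forced path removals; the pass criterion itself is only the reset count, enforced by the grep -c 'Resetting controller successful' check above (count=3).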
00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.248 09:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.625 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.625 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:11.625 09:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:11.625 [2024-12-05 09:59:59.048153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:11.625 09:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:28:11.884 [2024-12-05 09:59:59.272448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:28:11.884 09:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:12.143 NVMe0n1 00:28:12.143 09:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:12.401 00:28:12.401 09:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:12.659 00:28:12.659 10:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:12.659 10:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:13.227 10:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:13.227 10:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:16.510 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:16.510 10:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:16.510 10:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:16.510 10:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=99130 00:28:16.510 10:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 99130 00:28:17.885 { 00:28:17.885 "results": [ 00:28:17.885 { 00:28:17.885 "job": "NVMe0n1", 00:28:17.885 "core_mask": "0x1", 00:28:17.885 "workload": "verify", 00:28:17.885 "status": "finished", 00:28:17.885 "verify_range": { 00:28:17.885 "start": 0, 00:28:17.885 "length": 16384 00:28:17.885 }, 00:28:17.885 "queue_depth": 128, 
00:28:17.885 "io_size": 4096, 00:28:17.885 "runtime": 1.012438, 00:28:17.885 "iops": 8674.111402377232, 00:28:17.885 "mibps": 33.88324766553606, 00:28:17.885 "io_failed": 0, 00:28:17.885 "io_timeout": 0, 00:28:17.885 "avg_latency_us": 14693.448738535435, 00:28:17.885 "min_latency_us": 2561.8618181818183, 00:28:17.885 "max_latency_us": 16205.265454545455 00:28:17.885 } 00:28:17.885 ], 00:28:17.885 "core_count": 1 00:28:17.885 } 00:28:17.885 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:17.885 [2024-12-05 09:59:57.922650] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:17.885 [2024-12-05 09:59:57.922835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98992 ] 00:28:17.885 [2024-12-05 09:59:58.106696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.885 [2024-12-05 09:59:58.222681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.885 [2024-12-05 10:00:00.797685] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:28:17.885 [2024-12-05 10:00:00.797876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.885 [2024-12-05 10:00:00.797918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.885 [2024-12-05 10:00:00.797978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.885 [2024-12-05 10:00:00.798001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.885 [2024-12-05 10:00:00.798022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.885 [2024-12-05 10:00:00.798041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.885 [2024-12-05 10:00:00.798062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.885 [2024-12-05 10:00:00.798082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.885 [2024-12-05 10:00:00.798102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:28:17.885 [2024-12-05 10:00:00.798199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:17.885 [2024-12-05 10:00:00.798276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:17.885 [2024-12-05 10:00:00.807561] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:17.885 Running I/O for 1 seconds... 
00:28:17.885 8623.00 IOPS, 33.68 MiB/s 00:28:17.885 Latency(us) 00:28:17.885 [2024-12-05T10:00:05.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.885 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:17.885 Verification LBA range: start 0x0 length 0x4000 00:28:17.885 NVMe0n1 : 1.01 8674.11 33.88 0.00 0.00 14693.45 2561.86 16205.27 00:28:17.885 [2024-12-05T10:00:05.514Z] =================================================================================================================== 00:28:17.885 [2024-12-05T10:00:05.514Z] Total : 8674.11 33.88 0.00 0.00 14693.45 2561.86 16205.27 00:28:17.885 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:17.885 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:18.142 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.400 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.400 10:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:18.657 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.915 10:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 98992 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 98992 ']' 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 98992 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98992 00:28:22.218 killing process with pid 98992 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98992' 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 98992 00:28:22.218 10:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 98992 00:28:23.153 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:23.153 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.412 rmmod nvme_tcp 00:28:23.412 rmmod nvme_fabrics 00:28:23.412 rmmod nvme_keyring 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 98628 ']' 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 98628 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 98628 ']' 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 98628 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:23.412 10:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98628 00:28:23.412 killing process with pid 98628 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98628' 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 98628 00:28:23.412 10:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 98628 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:28:24.832 00:28:24.832 real 0m35.931s 00:28:24.832 user 2m17.473s 00:28:24.832 sys 0m4.933s 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:24.832 ************************************ 00:28:24.832 END TEST nvmf_failover 00:28:24.832 ************************************ 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.832 ************************************ 00:28:24.832 START TEST nvmf_host_discovery 00:28:24.832 ************************************ 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:24.832 * Looking for test storage... 
00:28:24.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:28:24.832 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.092 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:25.093 --rc genhtml_branch_coverage=1
00:28:25.093 --rc genhtml_function_coverage=1
00:28:25.093 --rc genhtml_legend=1
00:28:25.093 --rc geninfo_all_blocks=1
00:28:25.093 --rc geninfo_unexecuted_blocks=1
00:28:25.093
00:28:25.093 '
[... the same option block is traced three more times, for the LCOV_OPTS= assignment (@1724) and for the export and assignment of LCOV='lcov ...' (@1725); verbatim duplicates condensed ...]
00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same long toolchain PATH as printed at paths/export.sh@2 above, now with /opt/go/1.21.1/bin prepended; duplicate dump condensed ...] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH with /opt/protoc/21.7/bin prepended; condensed ...] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo [... final PATH echoed once more, identical to the @4 value; condensed ...] 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.093 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
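The block above is nvmf_veth_init assigning the test topology's names and addresses (initiator IPs 10.0.0.1/10.0.0.2 in the default namespace, target IPs 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk). The `ip link ... nomaster`/`down`/`delete` calls that follow are a best-effort cleanup of leftovers from a previous run, so the "Cannot find device" and "Cannot open network namespace" errors below are expected on a fresh runner; each failing command is paired with a traced `true` so `set -e` does not abort the test. (The earlier `integer expression expected` message from nvmf/common.sh line 33 is the same tolerance pattern: an optional flag tested with `-eq` while unset.) A sketch of the cleanup idiom, reconstructed from the @162-@174 trace entries rather than quoted from the source:

    # detach and delete whatever may exist; `|| true` mirrors the traced `true` fallbacks
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true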
00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:25.094 Cannot find device "nvmf_init_br" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:25.094 Cannot find device "nvmf_init_br2" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:25.094 Cannot find device "nvmf_tgt_br" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:25.094 Cannot find device "nvmf_tgt_br2" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:25.094 Cannot find device "nvmf_init_br" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:25.094 Cannot find device "nvmf_init_br2" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:25.094 Cannot find device "nvmf_tgt_br" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:25.094 Cannot find device "nvmf_tgt_br2" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:25.094 Cannot find device "nvmf_br" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:25.094 Cannot find device "nvmf_init_if" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:25.094 Cannot find device "nvmf_init_if2" 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:25.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:25.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:25.094 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:25.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:25.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:28:25.355 00:28:25.355 --- 10.0.0.3 ping statistics --- 00:28:25.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.355 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:25.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:25.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:28:25.355 00:28:25.355 --- 10.0.0.4 ping statistics --- 00:28:25.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.355 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:25.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:28:25.355 00:28:25.355 --- 10.0.0.1 ping statistics --- 00:28:25.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.355 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:25.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:25.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:28:25.355 00:28:25.355 --- 10.0.0.2 ping statistics --- 00:28:25.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.355 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=99511 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 99511 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 99511 ']' 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.355 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.356 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.356 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.356 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.615 [2024-12-05 10:00:13.054114] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
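At this point the veth/bridge plumbing has been verified in both directions (default namespace to target at 10.0.0.3/10.0.0.4, namespace back to host at 10.0.0.1/10.0.0.2), and the target command line is prefixed with the namespace wrapper so nvmf_tgt sees the 10.0.0.x network. Two details worth noting, reconstructed from the trace as sketches rather than verbatim source: the `ipts` wrapper tags every iptables rule with an `SPDK_NVMF:` comment so teardown can later remove exactly the rules this test inserted, and `nvmfappstart`/`waitforlisten` amount to launching the app in the background and polling its RPC socket:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # matches the @790 expansions above

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF)
    "${NVMF_APP[@]}" -m 0x2 &
    nvmfpid=$!
    # simplified waitforlisten: the real helper also bounds its retries and re-checks the pid
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.1
    done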
00:28:25.616 [2024-12-05 10:00:13.054332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.616 [2024-12-05 10:00:13.219594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.875 [2024-12-05 10:00:13.317831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.875 [2024-12-05 10:00:13.317901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.875 [2024-12-05 10:00:13.317935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.875 [2024-12-05 10:00:13.317972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.875 [2024-12-05 10:00:13.317985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.875 [2024-12-05 10:00:13.319073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.442 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.701 [2024-12-05 10:00:14.074506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.701 [2024-12-05 10:00:14.082719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.701 null0 00:28:26.701 10:00:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.701 null1 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=99562 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 99562 /tmp/host.sock 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 99562 ']' 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:26.701 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:26.701 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.702 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:26.702 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.702 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.702 [2024-12-05 10:00:14.245053] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
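The RPC sequence above provisions the target: a TCP transport, the well-known discovery subsystem listening on 10.0.0.3:8009, and two null bdevs (null0/null1, arguments as logged) to be exposed as namespaces later. The host side is then a second nvmf_tgt instance with its own RPC socket, which is why every host-side rpc_cmd in this test carries `-s /tmp/host.sock`. Condensed into a sketch:

    # target provisioning, options exactly as logged
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512

    # the host app: one core, private RPC socket, driven independently of the target
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!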
00:28:26.702 [2024-12-05 10:00:14.245233] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99562 ] 00:28:26.960 [2024-12-05 10:00:14.436613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.960 [2024-12-05 10:00:14.584523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.530 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:27.788 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.046 [2024-12-05 10:00:15.499119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:28.046 10:00:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:28.046 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:28.047 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:28.305 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:28.563 [2024-12-05 10:00:16.138967] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:28.563 [2024-12-05 10:00:16.139002] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:28.563 [2024-12-05 10:00:16.139059] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:28.822 
[2024-12-05 10:00:16.225108] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:28:28.822 [2024-12-05 10:00:16.287638] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:28:28.822 [2024-12-05 10:00:16.288869] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:28:28.822 [2024-12-05 10:00:16.291303] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:28.822 [2024-12-05 10:00:16.291339] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:28.822 [2024-12-05 10:00:16.297605] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 
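These INFO lines are the discovery machinery doing its job: the host attached to the discovery controller on 10.0.0.3:8009, the discovery log page reported nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420, and bdev_nvme attached controller `nvme0` there, surfacing its namespace as bdev `nvme0n1`. The `[[ '' == \n\v\m\e\0 ]]` miss just before this is the first poll of the wait loop losing the race against attach, hence the `sleep 1` and retry. The polling helper itself, reconstructed from the autotest_common.sh @918-@924 entries (a sketch, not the verbatim source):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition strings like '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        return 1
    }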
10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:29.389 [2024-12-05 10:00:16.969796] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.389 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:29.389 [2024-12-05 10:00:16.978617] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
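Every check in this test reduces to a handful of jq pipelines over the host's RPC socket; `sort`/`sort -n` makes the comparisons order-stable and `xargs` flattens the result to one space-separated line (so two paths compare as "4420 4421"). Notifications are cursor-based: each query asks for everything after `notify_id` and then advances the cursor, which is how `notification_count`/`notify_id` move 0, then 1, then 2 in the trace. The helpers, reconstructed from the discovery.sh trace as sketches:

    get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # advance the cursor past what was just counted
    }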
00:28:29.389 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.648 [2024-12-05 10:00:17.080823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:29.648 [2024-12-05 10:00:17.081472] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:29.648 [2024-12-05 10:00:17.081524] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:29.648 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.649 [2024-12-05 10:00:17.167110] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:29.649 [2024-12-05 10:00:17.231598] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:28:29.649 [2024-12-05 10:00:17.231665] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:29.649 [2024-12-05 10:00:17.231682] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:29.649 [2024-12-05 10:00:17.231691] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:29.649 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.027 [2024-12-05 10:00:18.362346] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:31.027 [2024-12-05 10:00:18.362390] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:31.027 [2024-12-05 10:00:18.368037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.027 [2024-12-05 10:00:18.368085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.027 [2024-12-05 10:00:18.368111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.027 [2024-12-05 10:00:18.368123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.027 [2024-12-05 10:00:18.368135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.027 [2024-12-05 10:00:18.368146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.027 [2024-12-05 10:00:18.368158] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.027 [2024-12-05 10:00:18.368169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.027 [2024-12-05 10:00:18.368181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.027 [2024-12-05 10:00:18.377996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.027 [2024-12-05 10:00:18.388007] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.027 [2024-12-05 10:00:18.388041] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.027 [2024-12-05 10:00:18.388052] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.027 [2024-12-05 10:00:18.388061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.027 [2024-12-05 10:00:18.388115] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.027 [2024-12-05 10:00:18.388195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.027 [2024-12-05 10:00:18.388225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.027 [2024-12-05 10:00:18.388241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.027 [2024-12-05 10:00:18.388263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.027 [2024-12-05 10:00:18.388290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.027 [2024-12-05 10:00:18.388318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.027 [2024-12-05 10:00:18.388331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.027 [2024-12-05 10:00:18.388362] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
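Note on the reconnect storm above and below: it is the expected fallout of the listener removal issued a few entries earlier (host/discovery.sh@127). With nothing left listening on 10.0.0.3:4420, every reconnect attempt dies in posix_sock_create() with errno 111 (ECONNREFUSED), and bdev_nvme keeps cycling delete-qpairs → disconnect → reconnect until the next discovery log page reports the 4420 path "not found" and prunes it. The target-side trigger, copied from the traced RPC:

# Traced at host/discovery.sh@127; once the listener is gone, the
# host-side reconnects below can only fail until discovery drops the path.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420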
00:28:31.027 [2024-12-05 10:00:18.388376] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.027 [2024-12-05 10:00:18.388383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.027 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.027 [2024-12-05 10:00:18.398127] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.027 [2024-12-05 10:00:18.398160] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.027 [2024-12-05 10:00:18.398170] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.027 [2024-12-05 10:00:18.398177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.027 [2024-12-05 10:00:18.398210] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.027 [2024-12-05 10:00:18.398299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.027 [2024-12-05 10:00:18.398326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.027 [2024-12-05 10:00:18.398341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.027 [2024-12-05 10:00:18.398363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.027 [2024-12-05 10:00:18.398381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.027 [2024-12-05 10:00:18.398393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.027 [2024-12-05 10:00:18.398405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.027 [2024-12-05 10:00:18.398425] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.027 [2024-12-05 10:00:18.398433] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.027 [2024-12-05 10:00:18.398440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.027 [2024-12-05 10:00:18.408224] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.027 [2024-12-05 10:00:18.408257] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.027 [2024-12-05 10:00:18.408266] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.027 [2024-12-05 10:00:18.408274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.027 [2024-12-05 10:00:18.408307] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:31.027 [2024-12-05 10:00:18.408380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.027 [2024-12-05 10:00:18.408407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.027 [2024-12-05 10:00:18.408421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.027 [2024-12-05 10:00:18.408441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.027 [2024-12-05 10:00:18.408458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.027 [2024-12-05 10:00:18.408470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.027 [2024-12-05 10:00:18.408481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.027 [2024-12-05 10:00:18.408490] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.027 [2024-12-05 10:00:18.408498] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.027 [2024-12-05 10:00:18.408504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.027 [2024-12-05 10:00:18.418323] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.027 [2024-12-05 10:00:18.418357] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.028 [2024-12-05 10:00:18.418367] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.418375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.028 [2024-12-05 10:00:18.418408] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.418472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.028 [2024-12-05 10:00:18.418497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.028 [2024-12-05 10:00:18.418513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.028 [2024-12-05 10:00:18.418534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.028 [2024-12-05 10:00:18.418564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.028 [2024-12-05 10:00:18.418578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.028 [2024-12-05 10:00:18.418595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:31.028 [2024-12-05 10:00:18.418622] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.028 [2024-12-05 10:00:18.418630] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.028 [2024-12-05 10:00:18.418636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 [2024-12-05 10:00:18.428421] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.028 [2024-12-05 10:00:18.428450] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.028 [2024-12-05 10:00:18.428459] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.428466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.028 [2024-12-05 10:00:18.428495] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:31.028 [2024-12-05 10:00:18.428552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.028 [2024-12-05 10:00:18.428576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.028 [2024-12-05 10:00:18.428589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.028 [2024-12-05 10:00:18.428609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.028 [2024-12-05 10:00:18.428639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.028 [2024-12-05 10:00:18.428653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.028 [2024-12-05 10:00:18.428664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.028 [2024-12-05 10:00:18.428674] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.028 [2024-12-05 10:00:18.428681] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.028 [2024-12-05 10:00:18.428687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.028 [2024-12-05 10:00:18.438516] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.028 [2024-12-05 10:00:18.438588] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.028 [2024-12-05 10:00:18.438599] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.438607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.028 [2024-12-05 10:00:18.438641] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.438719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.028 [2024-12-05 10:00:18.438745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.028 [2024-12-05 10:00:18.438759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.028 [2024-12-05 10:00:18.438782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.028 [2024-12-05 10:00:18.438813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.028 [2024-12-05 10:00:18.438827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.028 [2024-12-05 10:00:18.438839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:31.028 [2024-12-05 10:00:18.438849] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.028 [2024-12-05 10:00:18.438857] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.028 [2024-12-05 10:00:18.438863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.028 [2024-12-05 10:00:18.448654] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.028 [2024-12-05 10:00:18.448684] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.028 [2024-12-05 10:00:18.448694] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.448701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.028 [2024-12-05 10:00:18.448732] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.028 [2024-12-05 10:00:18.448790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.028 [2024-12-05 10:00:18.448814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.3, port=4420 00:28:31.028 [2024-12-05 10:00:18.448828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:28:31.028 [2024-12-05 10:00:18.448859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:31.028 [2024-12-05 10:00:18.448945] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:28:31.028 [2024-12-05 10:00:18.448974] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:31.028 [2024-12-05 10:00:18.449009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.028 [2024-12-05 10:00:18.449025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.028 [2024-12-05 10:00:18.449037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.028 [2024-12-05 10:00:18.449047] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.028 [2024-12-05 10:00:18.449054] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.028 [2024-12-05 10:00:18.449060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
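The @131 wait that follows leans on two helpers whose bodies can be reconstructed from the xtrace; a sketch (the in-tree versions in autotest_common.sh and host/discovery.sh may differ in detail):

# Polling helper seen at autotest_common.sh@918-924: retry a condition
# string up to 10 times, one second apart.
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		eval "$cond" && return 0
		sleep 1
	done
	return 1   # assumption: every traced wait in this run succeeds before this path
}

# Helper seen at host/discovery.sh@63: list the trsvcids of every path
# attached to a controller, via the host app's RPC socket.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
		jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# Usage, as in the check below: wait until only the 4421 path remains.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'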
00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.028 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:31.287 10:00:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.287 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.222 [2024-12-05 10:00:19.789375] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:32.222 [2024-12-05 10:00:19.789411] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:32.222 [2024-12-05 10:00:19.789446] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:32.481 [2024-12-05 10:00:19.875507] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:28:32.481 [2024-12-05 10:00:19.933945] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:28:32.481 [2024-12-05 10:00:19.934867] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:28:32.481 [2024-12-05 10:00:19.937407] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:32.481 [2024-12-05 10:00:19.937451] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:32.481 [2024-12-05 10:00:19.939597] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
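The trace entering here (@652 onward) is the NOT wrapper from autotest_common.sh inverting the exit status of an RPC that is expected to fail. A simplified sketch — the real helper also validates the command via valid_exec_arg and special-cases signal exit codes, as the @640-@679 lines show:

# Simplified sketch of NOT: succeed only when the wrapped command fails.
NOT() {
	local es=0
	"$@" || es=$?
	(( es != 0 ))
}

# Registering a second discovery service under an already-used name must
# fail with JSON-RPC error -17 (File exists), as the response below shows.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
	-b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w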
00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.481 2024/12/05 10:00:19 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:28:32.481 request: 00:28:32.481 { 00:28:32.481 "method": "bdev_nvme_start_discovery", 00:28:32.481 "params": { 00:28:32.481 "name": "nvme", 00:28:32.481 "trtype": "tcp", 00:28:32.481 "traddr": "10.0.0.3", 00:28:32.481 "adrfam": "ipv4", 00:28:32.481 "trsvcid": "8009", 00:28:32.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:32.481 "wait_for_attach": true 00:28:32.481 } 00:28:32.481 } 00:28:32.481 Got JSON-RPC error response 00:28:32.481 GoRPCClient: error on JSON-RPC call 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:32.481 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:32.481 10:00:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.481 2024/12/05 10:00:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:28:32.481 request: 00:28:32.481 { 00:28:32.481 "method": "bdev_nvme_start_discovery", 00:28:32.481 "params": { 00:28:32.481 "name": "nvme_second", 00:28:32.481 "trtype": "tcp", 00:28:32.481 "traddr": "10.0.0.3", 00:28:32.481 "adrfam": "ipv4", 00:28:32.481 "trsvcid": "8009", 00:28:32.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:32.481 "wait_for_attach": true 00:28:32.481 } 00:28:32.481 } 00:28:32.481 Got JSON-RPC error response 00:28:32.481 GoRPCClient: error on JSON-RPC call 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:32.481 
10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:32.481 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:32.482 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:32.482 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.482 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.482 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:32.740 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.741 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:33.676 [2024-12-05 10:00:21.201911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.676 [2024-12-05 10:00:21.201995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:28:33.676 [2024-12-05 10:00:21.202053] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:33.676 [2024-12-05 10:00:21.202085] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:33.676 [2024-12-05 10:00:21.202098] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:28:34.611 [2024-12-05 10:00:22.201868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.611 [2024-12-05 10:00:22.201913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:28:34.611 [2024-12-05 10:00:22.201970] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:34.611 [2024-12-05 10:00:22.201984] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:34.611 [2024-12-05 10:00:22.201995] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:28:35.986 [2024-12-05 10:00:23.201716] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:28:35.986 2024/12/05 10:00:23 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:28:35.986 request: 00:28:35.986 { 00:28:35.986 "method": "bdev_nvme_start_discovery", 00:28:35.986 "params": { 00:28:35.986 "name": "nvme_second", 00:28:35.986 "trtype": "tcp", 00:28:35.986 "traddr": "10.0.0.3", 00:28:35.986 "adrfam": "ipv4", 00:28:35.986 "trsvcid": "8010", 00:28:35.986 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:35.986 "wait_for_attach": false, 00:28:35.986 "attach_timeout_ms": 3000 00:28:35.986 } 00:28:35.986 } 00:28:35.986 Got JSON-RPC error response 00:28:35.986 GoRPCClient: error on JSON-RPC call 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:35.986 10:00:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 99562 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.986 rmmod nvme_tcp 00:28:35.986 rmmod nvme_fabrics 00:28:35.986 rmmod nvme_keyring 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 99511 ']' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 99511 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 99511 ']' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 99511 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99511 00:28:35.986 killing process with pid 99511 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99511' 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 99511 00:28:35.986 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 99511 00:28:36.921 10:00:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:36.921 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.922 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:28:37.181 00:28:37.181 real 0m12.192s 00:28:37.181 user 0m23.325s 00:28:37.181 sys 0m1.912s 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 ************************************ 00:28:37.181 END TEST nvmf_host_discovery 00:28:37.181 
************************************ 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 ************************************ 00:28:37.181 START TEST nvmf_host_multipath_status 00:28:37.181 ************************************ 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:37.181 * Looking for test storage... 00:28:37.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.181 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.440 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.440 --rc genhtml_branch_coverage=1 00:28:37.440 --rc genhtml_function_coverage=1 00:28:37.440 --rc genhtml_legend=1 00:28:37.440 --rc geninfo_all_blocks=1 00:28:37.440 --rc geninfo_unexecuted_blocks=1 00:28:37.441 00:28:37.441 ' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.441 --rc genhtml_branch_coverage=1 00:28:37.441 --rc genhtml_function_coverage=1 00:28:37.441 --rc genhtml_legend=1 00:28:37.441 --rc geninfo_all_blocks=1 00:28:37.441 --rc geninfo_unexecuted_blocks=1 00:28:37.441 00:28:37.441 ' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.441 --rc genhtml_branch_coverage=1 00:28:37.441 --rc genhtml_function_coverage=1 00:28:37.441 --rc genhtml_legend=1 00:28:37.441 --rc geninfo_all_blocks=1 00:28:37.441 --rc geninfo_unexecuted_blocks=1 00:28:37.441 00:28:37.441 ' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.441 --rc genhtml_branch_coverage=1 00:28:37.441 --rc genhtml_function_coverage=1 00:28:37.441 --rc genhtml_legend=1 00:28:37.441 --rc geninfo_all_blocks=1 00:28:37.441 --rc geninfo_unexecuted_blocks=1 00:28:37.441 00:28:37.441 ' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:37.441 10:00:24 
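The long cmp_versions walk above is scripts/common.sh deciding that lcov 1.15 predates 2, which selects the --rc lcov_branch_coverage/lcov_function_coverage spelling of the coverage options. The same element-wise comparison, condensed into an illustrative helper (the name and zero-padding of missing components are assumptions, not the script's exact code; components are assumed numeric, as the real script verifies with a regex):

    version_lt() {    # true if $1 sorts before $2, split on . - :
        local IFS='.-:' v=0 a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        while (( v < ${#a[@]} || v < ${#b[@]} )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( v++ ))
        done
        return 1      # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov detected"
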
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:37.441 Cannot find device "nvmf_init_br" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:37.441 Cannot find device "nvmf_init_br2" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:37.441 Cannot find device "nvmf_tgt_br" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:37.441 Cannot find device "nvmf_tgt_br2" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:37.441 Cannot find device "nvmf_init_br" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:37.441 Cannot find device "nvmf_init_br2" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:37.441 Cannot find device "nvmf_tgt_br" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:37.441 Cannot find device "nvmf_tgt_br2" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:37.441 Cannot find device "nvmf_br" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:28:37.441 Cannot find device "nvmf_init_if" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:37.441 Cannot find device "nvmf_init_if2" 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:37.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:37.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:37.441 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:28:37.441 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:37.441 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:37.441 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:37.441 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:37.441 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:37.442 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:37.442 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:37.442 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:37.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:37.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:28:37.700 00:28:37.700 --- 10.0.0.3 ping statistics --- 00:28:37.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.700 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:37.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:37.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:28:37.700 00:28:37.700 --- 10.0.0.4 ping statistics --- 00:28:37.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.700 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:37.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:28:37.700 00:28:37.700 --- 10.0.0.1 ping statistics --- 00:28:37.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.700 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:28:37.700 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:37.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:28:37.700 00:28:37.700 --- 10.0.0.2 ping statistics --- 00:28:37.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.700 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=100111 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 100111 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 100111 ']' 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
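Everything from the "Cannot find device ..." lines through the four pings is nvmf_veth_init: tear down any stale links, build veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, enslave all root-side ends to one bridge, open iptables for port 4420, and ping every address to prove the topology before any NVMe traffic flows. The core of that layout, reduced to a single initiator/target pair (the full script builds two of each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # only the _if end moves
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                  # root ns -> address inside the ns
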
00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.701 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:37.958 [2024-12-05 10:00:25.460768] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:37.958 [2024-12-05 10:00:25.460919] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.215 [2024-12-05 10:00:25.627067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.216 [2024-12-05 10:00:25.728842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.216 [2024-12-05 10:00:25.728908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.216 [2024-12-05 10:00:25.728961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.216 [2024-12-05 10:00:25.728986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.216 [2024-12-05 10:00:25.728999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.216 [2024-12-05 10:00:25.730838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.216 [2024-12-05 10:00:25.730861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.147 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.147 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=100111 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:39.148 [2024-12-05 10:00:26.740674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.148 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:39.414 Malloc0 00:28:39.672 10:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:39.672 10:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.240 10:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:40.240 [2024-12-05 10:00:27.837821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:40.240 10:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:40.500 [2024-12-05 10:00:28.058069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=100220 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 100220 /var/tmp/bdevperf.sock 00:28:40.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 100220 ']' 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
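Between nvmfappstart and bdevperf, the target side is assembled with a short RPC sequence: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, an ANA-reporting subsystem (-r) with a two-namespace cap (-m 2), the namespace itself, and the two listeners that give the host its two paths. The same sequence as a plain script ($rpc abbreviates the repo path used throughout this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
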
00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.500 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:41.437 10:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.437 10:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:41.437 10:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:41.696 10:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:42.264 Nvme0n1 00:28:42.264 10:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:42.523 Nvme0n1 00:28:42.523 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:42.523 10:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:44.423 10:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:44.423 10:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:44.682 10:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:44.940 10:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:46.316 10:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.316 10:00:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:46.575 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:46.575 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:46.575 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.575 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:46.833 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.833 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:46.833 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.833 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:47.091 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.091 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:47.091 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.091 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:47.349 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.349 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:47.349 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:47.349 10:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.609 10:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.609 10:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:47.609 10:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:47.866 10:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:48.124 10:00:35 
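Every port_status call above is the same probe: ask bdevperf's RPC socket for the host-side I/O paths and let jq extract a single boolean (current, connected, or accessible) for the path whose trsvcid matches, then compare it with the expected value. The helper, sketched as a standalone function over the same RPCs:

    port_status() {    # port_status <trsvcid> <field> <expected>
        local got
        got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                  bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true      # is 4420 the path I/O is routed to right now?
    port_status 4421 accessible true   # does 4421's ANA state permit I/O?
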
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:49.059 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:49.059 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:49.059 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.059 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:49.317 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.317 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:49.317 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.317 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:49.576 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.576 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:49.576 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.576 10:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:49.835 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.835 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:49.835 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.835 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:50.093 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.093 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:50.093 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.093 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:50.352 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.352 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:50.352 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.352 10:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:50.611 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.611 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:50.611 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:50.869 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:28:51.128 10:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:52.063 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:52.063 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:52.063 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.063 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:52.322 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.322 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:52.322 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:52.322 10:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.581 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:52.581 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:52.581 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.581 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:52.840 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.840 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
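The pattern across these set_ANA_state/check_status rounds: "connected" stays true for both ports as long as the TCP connections live; "accessible" goes false only when a listener's ANA state is made inaccessible; and "current" tracks the preferred path, flipping to whichever port advertises the better state and, on a tie, staying with the first-attached port (hence 4420 current=true when both ports are optimized or both non_optimized, and both current=false once everything is inaccessible). One transition spelled out, reusing the port_status helper sketched earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Demote 4420, promote 4421: the current path should flip to 4421.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 1                            # let the host re-read the ANA log page
    port_status 4420 current false
    port_status 4421 current true
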
connected true 00:28:52.840 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.840 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:53.098 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.098 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:53.098 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:53.098 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.356 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.356 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:53.356 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:53.356 10:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.614 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.614 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:53.614 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:53.872 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:54.453 10:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:55.425 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:55.425 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:55.425 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.425 10:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:55.684 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.684 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:55.684 10:00:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.684 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:55.942 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:55.942 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:55.942 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.942 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:56.200 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.200 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:56.200 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.200 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:56.458 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.458 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:56.458 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.458 10:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:56.715 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.715 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:56.715 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.715 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:56.973 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:56.973 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:56.973 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:57.288 10:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:57.546 10:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:58.477 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:58.477 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:58.477 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.477 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:58.734 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:58.734 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:58.734 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.734 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:58.992 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:58.992 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:58.992 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.992 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:59.250 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:59.250 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:59.250 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.250 10:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.816 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:00.076 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:00.076 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:00.076 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:00.334 10:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:00.593 10:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.971 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:02.230 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.230 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:02.230 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.230 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
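The trace above repeats a single probe pattern: ask the bdevperf app, via its RPC socket, for its I/O paths, extract one field for one port with jq, and compare the result to an expected literal. A minimal reconstruction of that helper, inferred only from the traced commands (the real port_status in host/multipath_status.sh may differ in detail; $rpc_py stands in for the traced rpc.py path):

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4421 accessible true
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

Each "[[ true == \t\r\u\e ]]" line in the trace is bash xtrace printing exactly this comparison; the backslash-escaped right-hand side is how xtrace renders a quoted (literal, non-glob) pattern.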
00:29:02.489 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.489 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:02.489 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.489 10:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:02.747 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.747 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:02.748 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:02.748 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:03.006 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:03.006 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:03.006 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.006 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:03.264 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.264 10:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:03.522 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:29:03.522 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:29:03.781 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:04.040 10:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:04.977 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:04.977 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:04.977 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
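check_status, as traced at @68 through @73, is six port_status probes in a fixed order: current, connected, then accessible, for port 4420 and then 4421. A sketch of that mapping, inferred from the call sites above (argument handling is an assumption; under the errexit semantics the suite runs with, any failed probe aborts the test):

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
    #              <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

The @121 call now in flight is the first check after @116 switched Nvme0n1 to active_active: with both listeners optimized, both paths are expected to be current simultaneously (true true true true true true), whereas the earlier checks under the default active_passive behavior expected exactly one current path at a time.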
00:29:04.977 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:05.236 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.236 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:05.236 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:05.236 10:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.494 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.494 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:05.494 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:05.494 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:06.063 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:06.322 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:06.322 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:06.322 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:06.322 10:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:06.581 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:06.581 
10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:06.581 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:06.841 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:07.099 10:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:08.040 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:08.040 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:08.040 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.040 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:08.607 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:08.607 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:08.607 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:08.607 10:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:08.866 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:09.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:09.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:09.124 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:09.125 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:09.383 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:09.383 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:09.383 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:09.383 10:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:09.952 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:09.952 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:09.952 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:09.952 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:29:10.211 10:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:29:11.147 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:11.147 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:11.147 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.147 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:11.405 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.405 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:11.405 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.405 10:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:11.664 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.664 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:29:11.664 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.664 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:11.923 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.923 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:11.923 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:11.923 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:12.181 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:12.181 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:12.181 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:12.181 10:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:12.748 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:13.006 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:29:13.266 10:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:14.643 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:14.643 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:14.643 10:01:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.643 10:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:14.643 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:14.643 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:14.643 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.643 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:14.902 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:14.902 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:14.902 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:14.902 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.161 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.161 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:15.161 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:15.161 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.420 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.420 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:15.420 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.420 10:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:15.680 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:15.680 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:15.680 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:15.680 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 100220 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 100220 ']' 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 100220 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100220 00:29:15.939 killing process with pid 100220 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100220' 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 100220 00:29:15.939 10:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 100220 00:29:15.939 { 00:29:15.939 "results": [ 00:29:15.939 { 00:29:15.939 "job": "Nvme0n1", 00:29:15.939 "core_mask": "0x4", 00:29:15.939 "workload": "verify", 00:29:15.939 "status": "terminated", 00:29:15.939 "verify_range": { 00:29:15.939 "start": 0, 00:29:15.939 "length": 16384 00:29:15.939 }, 00:29:15.939 "queue_depth": 128, 00:29:15.939 "io_size": 4096, 00:29:15.939 "runtime": 33.34401, 00:29:15.939 "iops": 7614.950931216731, 00:29:15.939 "mibps": 29.745902075065356, 00:29:15.939 "io_failed": 0, 00:29:15.939 "io_timeout": 0, 00:29:15.939 "avg_latency_us": 16782.372247502095, 00:29:15.939 "min_latency_us": 195.4909090909091, 00:29:15.939 "max_latency_us": 4026531.84 00:29:15.939 } 00:29:15.939 ], 00:29:15.939 "core_count": 1 00:29:15.939 } 00:29:16.877 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 100220 00:29:16.877 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:16.877 [2024-12-05 10:00:28.198637] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:29:16.877 [2024-12-05 10:00:28.198832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100220 ] 00:29:16.877 [2024-12-05 10:00:28.388573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.877 [2024-12-05 10:00:28.554636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.877 Running I/O for 90 seconds... 
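At @137 the test tears bdevperf down. The killprocess trace shows the usual guard sequence before the kill: "kill -0" to confirm pid 100220 is still alive, then uname and "ps --no-headers -o comm=" to verify the command name (reactor_2, not sudo) before sending the real signal; bdevperf emits its final JSON job summary on the way down, and wait reaps it. The kill -0 idiom in isolation (a generic sketch, not the exact killprocess body from autotest_common.sh):

    pid=100220                           # bdevperf, started earlier in this test
    if kill -0 "$pid" 2>/dev/null; then  # signal 0 delivers nothing; it only probes liveness
        kill "$pid"                      # bdevperf prints its JSON summary while shutting down
        wait "$pid"                      # reap the process and propagate its exit status
    fi

The summary itself is the payoff: 7614.95 IOPS over the 33.34 s verify run with io_failed: 0, i.e. every ANA transition exercised above was absorbed by path failover rather than surfacing as an I/O error.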
00:29:16.877 8763.00 IOPS, 34.23 MiB/s [2024-12-05T10:01:04.506Z] 8936.50 IOPS, 34.91 MiB/s [2024-12-05T10:01:04.506Z] 8983.33 IOPS, 35.09 MiB/s [2024-12-05T10:01:04.506Z] 8884.50 IOPS, 34.71 MiB/s [2024-12-05T10:01:04.506Z] 8759.80 IOPS, 34.22 MiB/s [2024-12-05T10:01:04.506Z] 8702.33 IOPS, 33.99 MiB/s [2024-12-05T10:01:04.506Z] 8610.00 IOPS, 33.63 MiB/s [2024-12-05T10:01:04.506Z] 8567.38 IOPS, 33.47 MiB/s [2024-12-05T10:01:04.506Z] 8554.89 IOPS, 33.42 MiB/s [2024-12-05T10:01:04.506Z] 8549.50 IOPS, 33.40 MiB/s [2024-12-05T10:01:04.506Z] 8544.18 IOPS, 33.38 MiB/s [2024-12-05T10:01:04.506Z] 8534.42 IOPS, 33.34 MiB/s [2024-12-05T10:01:04.506Z] 8533.38 IOPS, 33.33 MiB/s [2024-12-05T10:01:04.506Z] 8529.71 IOPS, 33.32 MiB/s [2024-12-05T10:01:04.506Z] [2024-12-05 10:00:44.782792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.877 [2024-12-05 10:00:44.782879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.877 [2024-12-05 10:00:44.783801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:16.877 [2024-12-05 10:00:44.783826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.783845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.783871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.783890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.783917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.783952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.783981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
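Each pair of NOTICE lines here is nvme_qpair.c logging a failed command and its completion: nvme_io_qpair_print_command shows the WRITE (sqid/cid identify the queue slot, lba/len the range), and spdk_nvme_print_completion shows the status ASYMMETRIC ACCESS INACCESSIBLE, where (03/02) decodes to Status Code Type 3h (Path Related Status) / Status Code 02h (Asymmetric Access Inaccessible). dnr:0 (Do Not Retry clear) is the important bit: the host may retry, so the ANA-aware multipath layer re-queues these writes toward whichever path is, or becomes, accessible. The bursts carry 10:00:44 timestamps, matching the window in which @104 and @108 above were driving listeners inaccessible. A hypothetical one-liner for tallying them per queue from the saved log (try.txt is the file cat'ed above):

    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c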
00:29:16.878 [2024-12-05 10:00:44.784435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.784741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.784772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.785961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.785984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.786033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.786081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.786129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.786177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.878 [2024-12-05 10:00:44.786227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:16.878 [2024-12-05 10:00:44.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:29:16.879 [2024-12-05 10:00:44.786344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.786723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.786743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.787983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.788924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.788974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
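
Note the addressing pattern in these entries: every WRITE is len:8 and successive LBAs step by 8 (80296, 80304, 80312, ...). len counts sectors, so with 512-byte sectors (an assumption here, but the one consistent with the verify job's "IO size: 4096" reported in the summary further down) each command carries 8 x 512 = 4096 bytes, i.e. the stream is back-to-back 4 KiB writes. As a trivial check:

  # 8 sectors x 512 B/sector = the job's 4096-byte I/O size,
  # and the LBA stride between consecutive WRITEs matches.
  echo $((8 * 512))        # 4096
  echo $((80304 - 80296))  # 8
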
00:29:16.879 [2024-12-05 10:00:44.789281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:00:44.789483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:00:44.789503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:16.879 8309.87 IOPS, 32.46 MiB/s [2024-12-05T10:01:04.508Z] 7790.50 IOPS, 30.43 MiB/s [2024-12-05T10:01:04.508Z] 7332.24 IOPS, 28.64 MiB/s [2024-12-05T10:01:04.508Z] 6924.89 IOPS, 27.05 MiB/s [2024-12-05T10:01:04.508Z] 6719.53 IOPS, 26.25 MiB/s [2024-12-05T10:01:04.508Z] 6792.45 IOPS, 26.53 MiB/s [2024-12-05T10:01:04.508Z] 6855.43 IOPS, 26.78 MiB/s [2024-12-05T10:01:04.508Z] 6955.86 IOPS, 27.17 MiB/s [2024-12-05T10:01:04.508Z] 7055.17 IOPS, 27.56 MiB/s [2024-12-05T10:01:04.508Z] 7168.79 IOPS, 28.00 MiB/s [2024-12-05T10:01:04.508Z] 7231.28 IOPS, 28.25 MiB/s [2024-12-05T10:01:04.508Z] 7272.92 IOPS, 28.41 MiB/s [2024-12-05T10:01:04.508Z] 7303.11 IOPS, 28.53 MiB/s [2024-12-05T10:01:04.508Z] 7352.11 IOPS, 28.72 MiB/s [2024-12-05T10:01:04.508Z] 7422.41 IOPS, 28.99 MiB/s [2024-12-05T10:01:04.508Z] 7492.73 IOPS, 29.27 MiB/s [2024-12-05T10:01:04.508Z] [2024-12-05 10:01:00.818998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.879 [2024-12-05 10:01:00.819080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:01:00.819151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:01:00.819179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:01:00.819210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:01:00.819280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:29:16.879 [2024-12-05 10:01:00.819327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:01:00.819360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:01:00.819386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:01:00.819405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:01:00.821504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.879 [2024-12-05 10:01:00.821548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:16.879 [2024-12-05 10:01:00.821586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.821607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.821663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.821711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.821762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.821807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.821853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.821899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.821925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.821972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.822129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.822176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.822972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.823010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.823033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.823062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.823083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.824967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.824997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.825153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.825198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.825243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.880 [2024-12-05 10:01:00.825292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:16.880 [2024-12-05 10:01:00.825482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.880 [2024-12-05 10:01:00.825503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.881 [2024-12-05 10:01:00.825595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.881 [2024-12-05 10:01:00.825640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.881 [2024-12-05 10:01:00.825685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.881 [2024-12-05 10:01:00.825730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:16.881 [2024-12-05 10:01:00.825966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.881 [2024-12-05 10:01:00.825989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
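
The rolling throughput samples threaded through this burst trace the failover window: 8309.87 IOPS before the path drops, sliding to 6719.53 IOPS while completions fail as ANA INACCESSIBLE, then climbing back through 7492.73 and, just below, 7605.58 IOPS once I/O is flowing on the surviving path. A sketch for extracting that timeline from a saved copy of this output (log.txt again being a hypothetical name):

  # List the per-interval IOPS samples in order; the dip brackets
  # the window in which this path's ANA state was INACCESSIBLE.
  grep -oE '[0-9]+\.[0-9]+ IOPS' log.txt | nl
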
00:29:16.881 [2024-12-05 10:01:00.826015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.881 [2024-12-05 10:01:00.826035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:16.881 [2024-12-05 10:01:00.826062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.881 [2024-12-05 10:01:00.826091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:16.881 7549.13 IOPS, 29.49 MiB/s [2024-12-05T10:01:04.510Z] 7581.94 IOPS, 29.62 MiB/s [2024-12-05T10:01:04.510Z] 7605.58 IOPS, 29.71 MiB/s [2024-12-05T10:01:04.510Z] Received shutdown signal, test time was about 33.344772 seconds
00:29:16.881
00:29:16.881 Latency(us)
00:29:16.881 [2024-12-05T10:01:04.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.881 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:16.881 Verification LBA range: start 0x0 length 0x4000
00:29:16.881 Nvme0n1 : 33.34 7614.95 29.75 0.00 0.00 16782.37 195.49 4026531.84
00:29:16.881 [2024-12-05T10:01:04.510Z] ===================================================================================================================
00:29:16.881 [2024-12-05T10:01:04.510Z] Total : 7614.95 29.75 0.00 0.00 16782.37 195.49 4026531.84
00:29:16.881 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:17.139 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:29:17.139 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:29:17.139 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:29:17.139 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:17.139 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:17.399 rmmod nvme_tcp
00:29:17.399 rmmod nvme_fabrics
00:29:17.399 rmmod nvme_keyring
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 100111 ']'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 100111
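
The summary table is internally consistent: at the job's 4096-byte I/O size, 7614.95 IOPS is 7614.95 x 4096 / 2^20 ~ 29.75 MiB/s, exactly the MiB/s column, averaged over the 33.34 s runtime. A one-line sanity check of that arithmetic:

  # IOPS x 4096 B per I/O, converted to MiB/s -> the table's MiB/s column.
  awk 'BEGIN { printf "%.2f\n", 7614.95 * 4096 / 1048576 }'  # 29.75
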
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 100111 ']'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 100111
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100111
00:29:17.399 killing process with pid 100111
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100111'
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 100111
00:29:17.399 10:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 100111
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:29:18.782 10:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:29:18.782 ************************************
00:29:18.782 END TEST nvmf_host_multipath_status
00:29:18.782 ************************************
00:29:18.782
00:29:18.782 real 0m41.619s
00:29:18.782 user 2m13.803s
00:29:18.782 sys 0m9.107s
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.782 ************************************
00:29:18.782 START TEST nvmf_discovery_remove_ifc
00:29:18.782 ************************************
00:29:18.782 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:29:18.782 * Looking for test storage...
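
The START/END banners and the real/user/sys block above come from the run_test wrapper that drives each test script. A minimal sketch of a harness in that style, illustrative only and not the actual implementation in autotest_common.sh:

  # Hypothetical banner-and-timing harness in the spirit of run_test.
  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"    # bash's time keyword emits the real/user/sys lines
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  # usage: run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp
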
00:29:18.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:18.783 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:18.783 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:29:18.783 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:19.042 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.043 --rc genhtml_branch_coverage=1 00:29:19.043 --rc genhtml_function_coverage=1 00:29:19.043 --rc genhtml_legend=1 00:29:19.043 --rc geninfo_all_blocks=1 00:29:19.043 --rc geninfo_unexecuted_blocks=1 00:29:19.043 00:29:19.043 ' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:19.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.043 --rc genhtml_branch_coverage=1 00:29:19.043 --rc genhtml_function_coverage=1 00:29:19.043 --rc genhtml_legend=1 00:29:19.043 --rc geninfo_all_blocks=1 00:29:19.043 --rc geninfo_unexecuted_blocks=1 00:29:19.043 00:29:19.043 ' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.043 --rc genhtml_branch_coverage=1 00:29:19.043 --rc genhtml_function_coverage=1 00:29:19.043 --rc genhtml_legend=1 00:29:19.043 --rc geninfo_all_blocks=1 00:29:19.043 --rc geninfo_unexecuted_blocks=1 00:29:19.043 00:29:19.043 ' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.043 --rc genhtml_branch_coverage=1 00:29:19.043 --rc genhtml_function_coverage=1 00:29:19.043 --rc genhtml_legend=1 00:29:19.043 --rc geninfo_all_blocks=1 00:29:19.043 --rc geninfo_unexecuted_blocks=1 00:29:19.043 00:29:19.043 ' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:19.043 10:01:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:19.043 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:19.044 10:01:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:19.044 Cannot find device "nvmf_init_br" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:19.044 Cannot find device "nvmf_init_br2" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:19.044 Cannot find device "nvmf_tgt_br" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:19.044 Cannot find device "nvmf_tgt_br2" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:19.044 Cannot find device "nvmf_init_br" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:19.044 Cannot find device "nvmf_init_br2" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:19.044 Cannot find device "nvmf_tgt_br" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:19.044 Cannot find device "nvmf_tgt_br2" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:19.044 Cannot find device "nvmf_br" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:19.044 Cannot find device "nvmf_init_if" 00:29:19.044 10:01:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:19.044 Cannot find device "nvmf_init_if2" 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:19.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:19.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:19.044 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:19.303 10:01:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:19.303 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:19.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:19.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:29:19.304 00:29:19.304 --- 10.0.0.3 ping statistics --- 00:29:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.304 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:19.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:19.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:29:19.304 00:29:19.304 --- 10.0.0.4 ping statistics --- 00:29:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.304 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:19.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:19.304 00:29:19.304 --- 10.0.0.1 ping statistics --- 00:29:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.304 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:19.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:29:19.304 00:29:19.304 --- 10.0.0.2 ping statistics --- 00:29:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.304 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=101588 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 101588 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 101588 ']' 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
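At this point the veth/namespace topology is verified, so the harness launches the NVMe-oF target inside the namespace and blocks until the app's RPC socket answers. A minimal sketch of that sequence, with the launch command restated from the xtrace above and the polling loop as an illustrative stand-in for the harness's waitforlisten helper:

    # Launch nvmf_tgt inside the test namespace (command as logged; core mask 0x2).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: poll until the app serves RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.1
    done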
00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.304 10:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:19.563 [2024-12-05 10:01:07.045267] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:29:19.563 [2024-12-05 10:01:07.045449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.821 [2024-12-05 10:01:07.238914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.821 [2024-12-05 10:01:07.385866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.821 [2024-12-05 10:01:07.385971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.821 [2024-12-05 10:01:07.385999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.821 [2024-12-05 10:01:07.386036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.821 [2024-12-05 10:01:07.386055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.821 [2024-12-05 10:01:07.387625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.389 10:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.389 [2024-12-05 10:01:07.961762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.389 [2024-12-05 10:01:07.969998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:20.389 null0 00:29:20.389 [2024-12-05 10:01:08.001855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=101640 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 101640 /tmp/host.sock 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 101640 ']' 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:20.654 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.654 10:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.654 [2024-12-05 10:01:08.115607] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:29:20.654 [2024-12-05 10:01:08.115751] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101640 ] 00:29:20.962 [2024-12-05 10:01:08.286990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.962 [2024-12-05 10:01:08.443259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.560 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:21.820 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.820 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:21.820 10:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.820 10:01:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.756 [2024-12-05 10:01:10.366626] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:22.756 [2024-12-05 10:01:10.366661] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:22.756 [2024-12-05 10:01:10.366698] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:23.015 [2024-12-05 10:01:10.452756] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:29:23.015 [2024-12-05 10:01:10.515325] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:29:23.015 [2024-12-05 10:01:10.516439] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:29:23.015 [2024-12-05 10:01:10.518334] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:23.015 [2024-12-05 10:01:10.518401] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:23.015 [2024-12-05 10:01:10.518460] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:23.015 [2024-12-05 10:01:10.518485] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:29:23.015 [2024-12-05 10:01:10.518519] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.015 [2024-12-05 10:01:10.525177] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
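The wait_for_bdev checks that follow all reduce, per the xtrace, to one pipeline: list the host app's bdevs over its private RPC socket, normalize the names, and retry once a second until the list matches. A rough equivalent of the two helpers (function names mirror the script; the loop bodies are inferred from the repeated sleep-1 iterations below):

    get_bdev_list() {
        # Sorted, space-joined bdev names from the host app (socket from the log).
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list equals the expected string.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }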
00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:23.015 10:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:24.393 10:01:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:24.393 10:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:25.329 10:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:26.265 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:26.266 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.266 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:26.266 10:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:27.202 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:27.202 10:01:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:27.461 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.461 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:27.461 10:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:28.394 10:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:28.394 [2024-12-05 10:01:15.946030] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:28.394 [2024-12-05 10:01:15.946109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.394 [2024-12-05 10:01:15.946137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.394 [2024-12-05 10:01:15.946155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.394 [2024-12-05 10:01:15.946167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.394 [2024-12-05 10:01:15.946180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.394 [2024-12-05 10:01:15.946191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.394 [2024-12-05 10:01:15.946204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.394 [2024-12-05 10:01:15.946215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.394 [2024-12-05 10:01:15.946228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:28.394 [2024-12-05 10:01:15.946240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.394 [2024-12-05 10:01:15.946294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:29:28.394 [2024-12-05 10:01:15.956020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:29:28.394 [2024-12-05 10:01:15.966035] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:28.394 [2024-12-05 10:01:15.966066] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:28.394 [2024-12-05 10:01:15.966076] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:28.394 [2024-12-05 10:01:15.966085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:28.394 [2024-12-05 10:01:15.966210] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:29.329 10:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:29.587 [2024-12-05 10:01:16.999059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:29.587 [2024-12-05 10:01:16.999181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:29:29.587 [2024-12-05 10:01:16.999225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:29:29.587 [2024-12-05 10:01:16.999299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:29:29.587 [2024-12-05 10:01:17.000344] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:29:29.587 [2024-12-05 10:01:17.000454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:29.588 [2024-12-05 10:01:17.000485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:29.588 [2024-12-05 10:01:17.000512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:29.588 [2024-12-05 10:01:17.000553] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
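This failure cascade is the intended result of the interface removal performed a few iterations earlier: the test pulled the target's address and downed its veth inside the namespace, so every connection to 10.0.0.3:4420 now dies with errno 110 (Connection timed out). The removal, restated from the xtrace:

    # From the log: make 10.0.0.3 unreachable so the host-side controller
    # starts failing with the 'Connection timed out' errors seen above.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down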
00:29:29.588 [2024-12-05 10:01:17.000572] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:29.588 [2024-12-05 10:01:17.000588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:29.588 [2024-12-05 10:01:17.000611] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:29.588 [2024-12-05 10:01:17.000627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:29.588 10:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.588 10:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:29.588 10:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:30.522 [2024-12-05 10:01:18.000742] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:30.522 [2024-12-05 10:01:18.000788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:30.522 [2024-12-05 10:01:18.000815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:30.522 [2024-12-05 10:01:18.000843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:30.522 [2024-12-05 10:01:18.000856] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:30.522 [2024-12-05 10:01:18.000868] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:30.522 [2024-12-05 10:01:18.000877] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:30.522 [2024-12-05 10:01:18.000885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
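How quickly these retries give up is set by the options passed to the discovery RPC when the host app was configured: reconnects fire every second, I/O fails fast after one second, and once the two-second controller-loss timeout expires the bdev layer stops retrying, the discovery poller removes the entry (the remove_discovery_entry line below), and the bdev disappears. The invocation, restated from the log (rpc.py standing in for the harness's rpc_cmd wrapper):

    # From the log: discovery with aggressive timeouts so bdev removal is
    # observable within a couple of seconds of the link going down.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach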
00:29:30.522 [2024-12-05 10:01:18.000936] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:29:30.523 [2024-12-05 10:01:18.001002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.523 [2024-12-05 10:01:18.001022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.523 [2024-12-05 10:01:18.001054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.523 [2024-12-05 10:01:18.001081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.523 [2024-12-05 10:01:18.001110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.523 [2024-12-05 10:01:18.001122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.523 [2024-12-05 10:01:18.001135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.523 [2024-12-05 10:01:18.001147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.523 [2024-12-05 10:01:18.001161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.523 [2024-12-05 10:01:18.001173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.523 [2024-12-05 10:01:18.001200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:29:30.523 [2024-12-05 10:01:18.001462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:30.523 [2024-12-05 10:01:18.002492] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:30.523 [2024-12-05 10:01:18.002543] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.523 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.781 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:30.781 10:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:31.715 10:01:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:31.715 10:01:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:32.652 [2024-12-05 10:01:20.008500] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:32.652 [2024-12-05 10:01:20.008554] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:32.652 [2024-12-05 10:01:20.008584] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:32.652 [2024-12-05 10:01:20.094680] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:29:32.652 [2024-12-05 10:01:20.157245] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:29:32.652 [2024-12-05 10:01:20.158376] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:29:32.652 [2024-12-05 10:01:20.160493] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:32.652 [2024-12-05 10:01:20.160571] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:32.652 [2024-12-05 10:01:20.160631] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:32.652 [2024-12-05 10:01:20.160655] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:29:32.652 [2024-12-05 10:01:20.160674] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:32.652 [2024-12-05 10:01:20.167034] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
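No new RPC is needed for this recovery: the discovery service started earlier keeps polling 10.0.0.3:8009, reattaches the subsystem once the address returns, and creates the controller under the next free name (nvme1 under the -b nvme prefix, hence bdev nvme1n1). The remaining verification and teardown reduce to the following (simplified; the harness's killprocess also verifies the process name before killing):

    # Confirm the re-attached namespace appeared, then stop both apps
    # (pids taken from the log).
    wait_for_bdev nvme1n1
    kill 101640 && wait 101640    # host app on /tmp/host.sock
    kill 101588 && wait 101588    # nvmf_tgt inside the namespace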
00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:32.652 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 101640 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 101640 ']' 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 101640 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101640 00:29:32.911 killing process with pid 101640 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101640' 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 101640 00:29:32.911 10:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 101640 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.847 rmmod nvme_tcp 00:29:33.847 rmmod nvme_fabrics 00:29:33.847 rmmod nvme_keyring 00:29:33.847 10:01:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 101588 ']' 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 101588 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 101588 ']' 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 101588 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.847 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101588 00:29:33.847 killing process with pid 101588 00:29:33.848 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:33.848 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:33.848 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101588' 00:29:33.848 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 101588 00:29:33.848 10:01:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 101588 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:34.784 10:01:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:34.784 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:29:35.043 00:29:35.043 real 0m16.269s 00:29:35.043 user 0m27.953s 00:29:35.043 sys 0m1.933s 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:35.043 ************************************ 00:29:35.043 END TEST nvmf_discovery_remove_ifc 00:29:35.043 ************************************ 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.043 ************************************ 00:29:35.043 START TEST nvmf_identify_kernel_target 00:29:35.043 ************************************ 00:29:35.043 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:35.303 * Looking for test storage... 
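Every suite in this log goes through the harness's run_test wrapper, which prints the START/END banners and the real/user/sys timing block seen above. The next suite's entry point, restated from the log:

    # Kernel-target identify test over TCP, as invoked by the harness.
    run_test nvmf_identify_kernel_target \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh \
        --transport=tcp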
00:29:35.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.303 --rc genhtml_branch_coverage=1 00:29:35.303 --rc genhtml_function_coverage=1 00:29:35.303 --rc genhtml_legend=1 00:29:35.303 --rc geninfo_all_blocks=1 00:29:35.303 --rc geninfo_unexecuted_blocks=1 00:29:35.303 00:29:35.303 ' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.303 --rc genhtml_branch_coverage=1 00:29:35.303 --rc genhtml_function_coverage=1 00:29:35.303 --rc genhtml_legend=1 00:29:35.303 --rc geninfo_all_blocks=1 00:29:35.303 --rc geninfo_unexecuted_blocks=1 00:29:35.303 00:29:35.303 ' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.303 --rc genhtml_branch_coverage=1 00:29:35.303 --rc genhtml_function_coverage=1 00:29:35.303 --rc genhtml_legend=1 00:29:35.303 --rc geninfo_all_blocks=1 00:29:35.303 --rc geninfo_unexecuted_blocks=1 00:29:35.303 00:29:35.303 ' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.303 --rc genhtml_branch_coverage=1 00:29:35.303 --rc genhtml_function_coverage=1 00:29:35.303 --rc genhtml_legend=1 00:29:35.303 --rc geninfo_all_blocks=1 00:29:35.303 --rc geninfo_unexecuted_blocks=1 00:29:35.303 00:29:35.303 ' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
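
[note] The lcov gate just traced (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' and ':' and compares them field by field. A simplified standalone sketch of that idea, assuming purely numeric fields (the traced scripts/common.sh handles more operators):

    version_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}     # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                  # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"          # matches the lt 1.15 2 check above
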
00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.303 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:35.304 10:01:22 
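
[note] The "[: : integer expression expected" diagnostic above is non-fatal: an unset test flag expands to the empty string, and the test builtin's -eq requires integers, so '[' '' -eq 1 ']' prints the error and returns status 2 (which the surrounding if treats as false). A defaulted expansion avoids the noise (variable name here is illustrative):

    flag=                                # an unset flag expands empty
    [ "$flag" -eq 1 ]                    # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ]               # default empty to 0: quiet false, status 1
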
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:35.304 10:01:22 
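
[note] NVMF_TARGET_NS_CMD above is built as a bash array so that any command can be transparently prefixed to run inside the target's network namespace; the trace later prepends it to NVMF_APP the same way. A minimal sketch of the prefix-array pattern (the nvmf_tgt argv is a placeholder):

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

    # run any command inside the namespace
    "${NVMF_TARGET_NS_CMD[@]}" ip addr show

    # prepend the prefix to an existing argv, as done for NVMF_APP below
    NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)               # placeholder argv
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
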
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:35.304 Cannot find device "nvmf_init_br" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:35.304 Cannot find device "nvmf_init_br2" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:35.304 Cannot find device "nvmf_tgt_br" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:35.304 Cannot find device "nvmf_tgt_br2" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:35.304 Cannot find device "nvmf_init_br" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:35.304 Cannot find device "nvmf_init_br2" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:35.304 Cannot find device "nvmf_tgt_br" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:35.304 Cannot find device "nvmf_tgt_br2" 00:29:35.304 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:35.564 Cannot find device "nvmf_br" 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:35.564 Cannot find device "nvmf_init_if" 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:35.564 Cannot find device "nvmf_init_if2" 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:35.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:35.564 10:01:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:35.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:35.564 10:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:35.564 10:01:23 
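
[note] The sequence above builds the virtual test topology from scratch: a namespace for the target, veth pairs whose target-side ends are moved into it, addresses from the 10.0.0.0/24 plan (.1/.2 initiator side, .3/.4 inside the namespace), and a bridge; the next lines enslave the bridge ports, open TCP/4420 in iptables, and ping-verify both directions. Condensed to one initiator/target pair, the same layout is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end in
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # enslaved next in the trace
    ip link set nvmf_tgt_br master nvmf_br

The earlier "Cannot find device" messages are the same helper running its teardown first; each command is paired with a true fallback so a clean host does not abort the -e shell.
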
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:35.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:35.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:29:35.564 00:29:35.564 --- 10.0.0.3 ping statistics --- 00:29:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.564 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:29:35.564 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:35.564 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:35.564 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:29:35.564 00:29:35.564 --- 10.0.0.4 ping statistics --- 00:29:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.564 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:35.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:29:35.823 00:29:35.823 --- 10.0.0.1 ping statistics --- 00:29:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.823 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:35.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:35.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:29:35.823 00:29:35.823 --- 10.0.0.2 ping statistics --- 00:29:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.823 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:35.823 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:36.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:36.082 Waiting for block devices as requested 00:29:36.082 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:36.341 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:36.341 No valid GPT data, bailing 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:29:36.341 10:01:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:36.341 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:36.601 No valid GPT data, bailing 00:29:36.601 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:36.601 No valid GPT data, bailing 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:36.601 No valid GPT data, bailing 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.1 -t tcp -s 4420 00:29:36.601 00:29:36.601 Discovery Log Number of Records 2, Generation counter 2 00:29:36.601 =====Discovery Log Entry 0====== 00:29:36.601 trtype: tcp 00:29:36.601 adrfam: ipv4 00:29:36.601 subtype: current discovery subsystem 00:29:36.601 treq: not specified, sq flow control disable supported 00:29:36.601 portid: 1 00:29:36.601 trsvcid: 4420 00:29:36.601 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:36.601 traddr: 10.0.0.1 00:29:36.601 eflags: none 00:29:36.601 sectype: none 00:29:36.601 =====Discovery Log Entry 1====== 00:29:36.601 trtype: tcp 00:29:36.601 adrfam: ipv4 00:29:36.601 subtype: nvme subsystem 00:29:36.601 treq: not 
specified, sq flow control disable supported 00:29:36.601 portid: 1 00:29:36.601 trsvcid: 4420 00:29:36.601 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:36.601 traddr: 10.0.0.1 00:29:36.601 eflags: none 00:29:36.601 sectype: none 00:29:36.601 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:36.601 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:36.861 ===================================================== 00:29:36.861 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:36.861 ===================================================== 00:29:36.861 Controller Capabilities/Features 00:29:36.861 ================================ 00:29:36.861 Vendor ID: 0000 00:29:36.861 Subsystem Vendor ID: 0000 00:29:36.861 Serial Number: 5cb81d4d9c9757c470d5 00:29:36.861 Model Number: Linux 00:29:36.861 Firmware Version: 6.8.9-20 00:29:36.861 Recommended Arb Burst: 0 00:29:36.861 IEEE OUI Identifier: 00 00 00 00:29:36.861 Multi-path I/O 00:29:36.861 May have multiple subsystem ports: No 00:29:36.861 May have multiple controllers: No 00:29:36.861 Associated with SR-IOV VF: No 00:29:36.861 Max Data Transfer Size: Unlimited 00:29:36.861 Max Number of Namespaces: 0 00:29:36.861 Max Number of I/O Queues: 1024 00:29:36.861 NVMe Specification Version (VS): 1.3 00:29:36.861 NVMe Specification Version (Identify): 1.3 00:29:36.861 Maximum Queue Entries: 1024 00:29:36.861 Contiguous Queues Required: No 00:29:36.861 Arbitration Mechanisms Supported 00:29:36.861 Weighted Round Robin: Not Supported 00:29:36.861 Vendor Specific: Not Supported 00:29:36.861 Reset Timeout: 7500 ms 00:29:36.861 Doorbell Stride: 4 bytes 00:29:36.861 NVM Subsystem Reset: Not Supported 00:29:36.861 Command Sets Supported 00:29:36.861 NVM Command Set: Supported 00:29:36.861 Boot Partition: Not Supported 00:29:36.861 Memory Page Size Minimum: 4096 bytes 00:29:36.861 Memory Page Size Maximum: 4096 bytes 00:29:36.861 Persistent Memory Region: Not Supported 00:29:36.861 Optional Asynchronous Events Supported 00:29:36.861 Namespace Attribute Notices: Not Supported 00:29:36.861 Firmware Activation Notices: Not Supported 00:29:36.861 ANA Change Notices: Not Supported 00:29:36.861 PLE Aggregate Log Change Notices: Not Supported 00:29:36.861 LBA Status Info Alert Notices: Not Supported 00:29:36.861 EGE Aggregate Log Change Notices: Not Supported 00:29:36.861 Normal NVM Subsystem Shutdown event: Not Supported 00:29:36.861 Zone Descriptor Change Notices: Not Supported 00:29:36.861 Discovery Log Change Notices: Supported 00:29:36.861 Controller Attributes 00:29:36.861 128-bit Host Identifier: Not Supported 00:29:36.861 Non-Operational Permissive Mode: Not Supported 00:29:36.861 NVM Sets: Not Supported 00:29:36.861 Read Recovery Levels: Not Supported 00:29:36.861 Endurance Groups: Not Supported 00:29:36.861 Predictable Latency Mode: Not Supported 00:29:36.861 Traffic Based Keep ALive: Not Supported 00:29:36.861 Namespace Granularity: Not Supported 00:29:36.861 SQ Associations: Not Supported 00:29:36.861 UUID List: Not Supported 00:29:36.861 Multi-Domain Subsystem: Not Supported 00:29:36.861 Fixed Capacity Management: Not Supported 00:29:36.861 Variable Capacity Management: Not Supported 00:29:36.861 Delete Endurance Group: Not Supported 00:29:36.861 Delete NVM Set: Not Supported 00:29:36.861 Extended LBA Formats Supported: Not Supported 00:29:36.861 Flexible Data 
Placement Supported: Not Supported 00:29:36.861 00:29:36.861 Controller Memory Buffer Support 00:29:36.861 ================================ 00:29:36.861 Supported: No 00:29:36.861 00:29:36.861 Persistent Memory Region Support 00:29:36.861 ================================ 00:29:36.861 Supported: No 00:29:36.861 00:29:36.861 Admin Command Set Attributes 00:29:36.861 ============================ 00:29:36.861 Security Send/Receive: Not Supported 00:29:36.861 Format NVM: Not Supported 00:29:36.861 Firmware Activate/Download: Not Supported 00:29:36.861 Namespace Management: Not Supported 00:29:36.861 Device Self-Test: Not Supported 00:29:36.861 Directives: Not Supported 00:29:36.861 NVMe-MI: Not Supported 00:29:36.861 Virtualization Management: Not Supported 00:29:36.861 Doorbell Buffer Config: Not Supported 00:29:36.861 Get LBA Status Capability: Not Supported 00:29:36.861 Command & Feature Lockdown Capability: Not Supported 00:29:36.861 Abort Command Limit: 1 00:29:36.861 Async Event Request Limit: 1 00:29:36.861 Number of Firmware Slots: N/A 00:29:36.861 Firmware Slot 1 Read-Only: N/A 00:29:36.861 Firmware Activation Without Reset: N/A 00:29:36.861 Multiple Update Detection Support: N/A 00:29:36.861 Firmware Update Granularity: No Information Provided 00:29:36.861 Per-Namespace SMART Log: No 00:29:36.861 Asymmetric Namespace Access Log Page: Not Supported 00:29:36.861 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:36.861 Command Effects Log Page: Not Supported 00:29:36.861 Get Log Page Extended Data: Supported 00:29:36.861 Telemetry Log Pages: Not Supported 00:29:36.861 Persistent Event Log Pages: Not Supported 00:29:36.861 Supported Log Pages Log Page: May Support 00:29:36.861 Commands Supported & Effects Log Page: Not Supported 00:29:36.861 Feature Identifiers & Effects Log Page:May Support 00:29:36.862 NVMe-MI Commands & Effects Log Page: May Support 00:29:36.862 Data Area 4 for Telemetry Log: Not Supported 00:29:36.862 Error Log Page Entries Supported: 1 00:29:36.862 Keep Alive: Not Supported 00:29:36.862 00:29:36.862 NVM Command Set Attributes 00:29:36.862 ========================== 00:29:36.862 Submission Queue Entry Size 00:29:36.862 Max: 1 00:29:36.862 Min: 1 00:29:36.862 Completion Queue Entry Size 00:29:36.862 Max: 1 00:29:36.862 Min: 1 00:29:36.862 Number of Namespaces: 0 00:29:36.862 Compare Command: Not Supported 00:29:36.862 Write Uncorrectable Command: Not Supported 00:29:36.862 Dataset Management Command: Not Supported 00:29:36.862 Write Zeroes Command: Not Supported 00:29:36.862 Set Features Save Field: Not Supported 00:29:36.862 Reservations: Not Supported 00:29:36.862 Timestamp: Not Supported 00:29:36.862 Copy: Not Supported 00:29:36.862 Volatile Write Cache: Not Present 00:29:36.862 Atomic Write Unit (Normal): 1 00:29:36.862 Atomic Write Unit (PFail): 1 00:29:36.862 Atomic Compare & Write Unit: 1 00:29:36.862 Fused Compare & Write: Not Supported 00:29:36.862 Scatter-Gather List 00:29:36.862 SGL Command Set: Supported 00:29:36.862 SGL Keyed: Not Supported 00:29:36.862 SGL Bit Bucket Descriptor: Not Supported 00:29:36.862 SGL Metadata Pointer: Not Supported 00:29:36.862 Oversized SGL: Not Supported 00:29:36.862 SGL Metadata Address: Not Supported 00:29:36.862 SGL Offset: Supported 00:29:36.862 Transport SGL Data Block: Not Supported 00:29:36.862 Replay Protected Memory Block: Not Supported 00:29:36.862 00:29:36.862 Firmware Slot Information 00:29:36.862 ========================= 00:29:36.862 Active slot: 0 00:29:36.862 00:29:36.862 00:29:36.862 Error Log 
00:29:36.862 ========= 00:29:36.862 00:29:36.862 Active Namespaces 00:29:36.862 ================= 00:29:36.862 Discovery Log Page 00:29:36.862 ================== 00:29:36.862 Generation Counter: 2 00:29:36.862 Number of Records: 2 00:29:36.862 Record Format: 0 00:29:36.862 00:29:36.862 Discovery Log Entry 0 00:29:36.862 ---------------------- 00:29:36.862 Transport Type: 3 (TCP) 00:29:36.862 Address Family: 1 (IPv4) 00:29:36.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:36.862 Entry Flags: 00:29:36.862 Duplicate Returned Information: 0 00:29:36.862 Explicit Persistent Connection Support for Discovery: 0 00:29:36.862 Transport Requirements: 00:29:36.862 Secure Channel: Not Specified 00:29:36.862 Port ID: 1 (0x0001) 00:29:36.862 Controller ID: 65535 (0xffff) 00:29:36.862 Admin Max SQ Size: 32 00:29:36.862 Transport Service Identifier: 4420 00:29:36.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:36.862 Transport Address: 10.0.0.1 00:29:36.862 Discovery Log Entry 1 00:29:36.862 ---------------------- 00:29:36.862 Transport Type: 3 (TCP) 00:29:36.862 Address Family: 1 (IPv4) 00:29:36.862 Subsystem Type: 2 (NVM Subsystem) 00:29:36.862 Entry Flags: 00:29:36.862 Duplicate Returned Information: 0 00:29:36.862 Explicit Persistent Connection Support for Discovery: 0 00:29:36.862 Transport Requirements: 00:29:36.862 Secure Channel: Not Specified 00:29:36.862 Port ID: 1 (0x0001) 00:29:36.862 Controller ID: 65535 (0xffff) 00:29:36.862 Admin Max SQ Size: 32 00:29:36.862 Transport Service Identifier: 4420 00:29:36.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:36.862 Transport Address: 10.0.0.1 00:29:36.862 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:37.121 get_feature(0x01) failed 00:29:37.121 get_feature(0x02) failed 00:29:37.121 get_feature(0x04) failed 00:29:37.121 ===================================================== 00:29:37.121 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:37.121 ===================================================== 00:29:37.121 Controller Capabilities/Features 00:29:37.121 ================================ 00:29:37.121 Vendor ID: 0000 00:29:37.121 Subsystem Vendor ID: 0000 00:29:37.121 Serial Number: 3b189aded48ca7def9c1 00:29:37.121 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:37.121 Firmware Version: 6.8.9-20 00:29:37.121 Recommended Arb Burst: 6 00:29:37.121 IEEE OUI Identifier: 00 00 00 00:29:37.121 Multi-path I/O 00:29:37.121 May have multiple subsystem ports: Yes 00:29:37.121 May have multiple controllers: Yes 00:29:37.121 Associated with SR-IOV VF: No 00:29:37.121 Max Data Transfer Size: Unlimited 00:29:37.121 Max Number of Namespaces: 1024 00:29:37.121 Max Number of I/O Queues: 128 00:29:37.121 NVMe Specification Version (VS): 1.3 00:29:37.121 NVMe Specification Version (Identify): 1.3 00:29:37.121 Maximum Queue Entries: 1024 00:29:37.121 Contiguous Queues Required: No 00:29:37.121 Arbitration Mechanisms Supported 00:29:37.121 Weighted Round Robin: Not Supported 00:29:37.121 Vendor Specific: Not Supported 00:29:37.121 Reset Timeout: 7500 ms 00:29:37.121 Doorbell Stride: 4 bytes 00:29:37.121 NVM Subsystem Reset: Not Supported 00:29:37.121 Command Sets Supported 00:29:37.121 NVM Command Set: Supported 00:29:37.121 Boot Partition: Not Supported 00:29:37.121 Memory 
Page Size Minimum: 4096 bytes 00:29:37.121 Memory Page Size Maximum: 4096 bytes 00:29:37.121 Persistent Memory Region: Not Supported 00:29:37.121 Optional Asynchronous Events Supported 00:29:37.121 Namespace Attribute Notices: Supported 00:29:37.121 Firmware Activation Notices: Not Supported 00:29:37.121 ANA Change Notices: Supported 00:29:37.121 PLE Aggregate Log Change Notices: Not Supported 00:29:37.121 LBA Status Info Alert Notices: Not Supported 00:29:37.121 EGE Aggregate Log Change Notices: Not Supported 00:29:37.121 Normal NVM Subsystem Shutdown event: Not Supported 00:29:37.121 Zone Descriptor Change Notices: Not Supported 00:29:37.121 Discovery Log Change Notices: Not Supported 00:29:37.121 Controller Attributes 00:29:37.121 128-bit Host Identifier: Supported 00:29:37.121 Non-Operational Permissive Mode: Not Supported 00:29:37.121 NVM Sets: Not Supported 00:29:37.121 Read Recovery Levels: Not Supported 00:29:37.121 Endurance Groups: Not Supported 00:29:37.121 Predictable Latency Mode: Not Supported 00:29:37.121 Traffic Based Keep ALive: Supported 00:29:37.121 Namespace Granularity: Not Supported 00:29:37.122 SQ Associations: Not Supported 00:29:37.122 UUID List: Not Supported 00:29:37.122 Multi-Domain Subsystem: Not Supported 00:29:37.122 Fixed Capacity Management: Not Supported 00:29:37.122 Variable Capacity Management: Not Supported 00:29:37.122 Delete Endurance Group: Not Supported 00:29:37.122 Delete NVM Set: Not Supported 00:29:37.122 Extended LBA Formats Supported: Not Supported 00:29:37.122 Flexible Data Placement Supported: Not Supported 00:29:37.122 00:29:37.122 Controller Memory Buffer Support 00:29:37.122 ================================ 00:29:37.122 Supported: No 00:29:37.122 00:29:37.122 Persistent Memory Region Support 00:29:37.122 ================================ 00:29:37.122 Supported: No 00:29:37.122 00:29:37.122 Admin Command Set Attributes 00:29:37.122 ============================ 00:29:37.122 Security Send/Receive: Not Supported 00:29:37.122 Format NVM: Not Supported 00:29:37.122 Firmware Activate/Download: Not Supported 00:29:37.122 Namespace Management: Not Supported 00:29:37.122 Device Self-Test: Not Supported 00:29:37.122 Directives: Not Supported 00:29:37.122 NVMe-MI: Not Supported 00:29:37.122 Virtualization Management: Not Supported 00:29:37.122 Doorbell Buffer Config: Not Supported 00:29:37.122 Get LBA Status Capability: Not Supported 00:29:37.122 Command & Feature Lockdown Capability: Not Supported 00:29:37.122 Abort Command Limit: 4 00:29:37.122 Async Event Request Limit: 4 00:29:37.122 Number of Firmware Slots: N/A 00:29:37.122 Firmware Slot 1 Read-Only: N/A 00:29:37.122 Firmware Activation Without Reset: N/A 00:29:37.122 Multiple Update Detection Support: N/A 00:29:37.122 Firmware Update Granularity: No Information Provided 00:29:37.122 Per-Namespace SMART Log: Yes 00:29:37.122 Asymmetric Namespace Access Log Page: Supported 00:29:37.122 ANA Transition Time : 10 sec 00:29:37.122 00:29:37.122 Asymmetric Namespace Access Capabilities 00:29:37.122 ANA Optimized State : Supported 00:29:37.122 ANA Non-Optimized State : Supported 00:29:37.122 ANA Inaccessible State : Supported 00:29:37.122 ANA Persistent Loss State : Supported 00:29:37.122 ANA Change State : Supported 00:29:37.122 ANAGRPID is not changed : No 00:29:37.122 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:37.122 00:29:37.122 ANA Group Identifier Maximum : 128 00:29:37.122 Number of ANA Group Identifiers : 128 00:29:37.122 Max Number of Allowed Namespaces : 1024 00:29:37.122 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:29:37.122 Command Effects Log Page: Supported 00:29:37.122 Get Log Page Extended Data: Supported 00:29:37.122 Telemetry Log Pages: Not Supported 00:29:37.122 Persistent Event Log Pages: Not Supported 00:29:37.122 Supported Log Pages Log Page: May Support 00:29:37.122 Commands Supported & Effects Log Page: Not Supported 00:29:37.122 Feature Identifiers & Effects Log Page:May Support 00:29:37.122 NVMe-MI Commands & Effects Log Page: May Support 00:29:37.122 Data Area 4 for Telemetry Log: Not Supported 00:29:37.122 Error Log Page Entries Supported: 128 00:29:37.122 Keep Alive: Supported 00:29:37.122 Keep Alive Granularity: 1000 ms 00:29:37.122 00:29:37.122 NVM Command Set Attributes 00:29:37.122 ========================== 00:29:37.122 Submission Queue Entry Size 00:29:37.122 Max: 64 00:29:37.122 Min: 64 00:29:37.122 Completion Queue Entry Size 00:29:37.122 Max: 16 00:29:37.122 Min: 16 00:29:37.122 Number of Namespaces: 1024 00:29:37.122 Compare Command: Not Supported 00:29:37.122 Write Uncorrectable Command: Not Supported 00:29:37.122 Dataset Management Command: Supported 00:29:37.122 Write Zeroes Command: Supported 00:29:37.122 Set Features Save Field: Not Supported 00:29:37.122 Reservations: Not Supported 00:29:37.122 Timestamp: Not Supported 00:29:37.122 Copy: Not Supported 00:29:37.122 Volatile Write Cache: Present 00:29:37.122 Atomic Write Unit (Normal): 1 00:29:37.122 Atomic Write Unit (PFail): 1 00:29:37.122 Atomic Compare & Write Unit: 1 00:29:37.122 Fused Compare & Write: Not Supported 00:29:37.122 Scatter-Gather List 00:29:37.122 SGL Command Set: Supported 00:29:37.122 SGL Keyed: Not Supported 00:29:37.122 SGL Bit Bucket Descriptor: Not Supported 00:29:37.122 SGL Metadata Pointer: Not Supported 00:29:37.122 Oversized SGL: Not Supported 00:29:37.122 SGL Metadata Address: Not Supported 00:29:37.122 SGL Offset: Supported 00:29:37.122 Transport SGL Data Block: Not Supported 00:29:37.122 Replay Protected Memory Block: Not Supported 00:29:37.122 00:29:37.122 Firmware Slot Information 00:29:37.122 ========================= 00:29:37.122 Active slot: 0 00:29:37.122 00:29:37.122 Asymmetric Namespace Access 00:29:37.122 =========================== 00:29:37.122 Change Count : 0 00:29:37.122 Number of ANA Group Descriptors : 1 00:29:37.122 ANA Group Descriptor : 0 00:29:37.122 ANA Group ID : 1 00:29:37.122 Number of NSID Values : 1 00:29:37.122 Change Count : 0 00:29:37.122 ANA State : 1 00:29:37.122 Namespace Identifier : 1 00:29:37.122 00:29:37.122 Commands Supported and Effects 00:29:37.122 ============================== 00:29:37.122 Admin Commands 00:29:37.122 -------------- 00:29:37.122 Get Log Page (02h): Supported 00:29:37.122 Identify (06h): Supported 00:29:37.122 Abort (08h): Supported 00:29:37.122 Set Features (09h): Supported 00:29:37.122 Get Features (0Ah): Supported 00:29:37.122 Asynchronous Event Request (0Ch): Supported 00:29:37.122 Keep Alive (18h): Supported 00:29:37.122 I/O Commands 00:29:37.122 ------------ 00:29:37.122 Flush (00h): Supported 00:29:37.122 Write (01h): Supported LBA-Change 00:29:37.122 Read (02h): Supported 00:29:37.122 Write Zeroes (08h): Supported LBA-Change 00:29:37.122 Dataset Management (09h): Supported 00:29:37.122 00:29:37.122 Error Log 00:29:37.122 ========= 00:29:37.122 Entry: 0 00:29:37.122 Error Count: 0x3 00:29:37.122 Submission Queue Id: 0x0 00:29:37.122 Command Id: 0x5 00:29:37.122 Phase Bit: 0 00:29:37.122 Status Code: 0x2 00:29:37.122 Status Code Type: 0x0 00:29:37.122 Do Not Retry: 1 00:29:37.381 Error 
Location: 0x28 00:29:37.381 LBA: 0x0 00:29:37.381 Namespace: 0x0 00:29:37.381 Vendor Log Page: 0x0 00:29:37.381 ----------- 00:29:37.381 Entry: 1 00:29:37.381 Error Count: 0x2 00:29:37.381 Submission Queue Id: 0x0 00:29:37.381 Command Id: 0x5 00:29:37.381 Phase Bit: 0 00:29:37.381 Status Code: 0x2 00:29:37.381 Status Code Type: 0x0 00:29:37.381 Do Not Retry: 1 00:29:37.381 Error Location: 0x28 00:29:37.381 LBA: 0x0 00:29:37.381 Namespace: 0x0 00:29:37.381 Vendor Log Page: 0x0 00:29:37.381 ----------- 00:29:37.381 Entry: 2 00:29:37.381 Error Count: 0x1 00:29:37.381 Submission Queue Id: 0x0 00:29:37.381 Command Id: 0x4 00:29:37.381 Phase Bit: 0 00:29:37.381 Status Code: 0x2 00:29:37.381 Status Code Type: 0x0 00:29:37.381 Do Not Retry: 1 00:29:37.381 Error Location: 0x28 00:29:37.381 LBA: 0x0 00:29:37.381 Namespace: 0x0 00:29:37.381 Vendor Log Page: 0x0 00:29:37.381 00:29:37.381 Number of Queues 00:29:37.381 ================ 00:29:37.381 Number of I/O Submission Queues: 128 00:29:37.381 Number of I/O Completion Queues: 128 00:29:37.381 00:29:37.381 ZNS Specific Controller Data 00:29:37.381 ============================ 00:29:37.381 Zone Append Size Limit: 0 00:29:37.381 00:29:37.381 00:29:37.381 Active Namespaces 00:29:37.381 ================= 00:29:37.381 get_feature(0x05) failed 00:29:37.381 Namespace ID:1 00:29:37.381 Command Set Identifier: NVM (00h) 00:29:37.381 Deallocate: Supported 00:29:37.381 Deallocated/Unwritten Error: Not Supported 00:29:37.381 Deallocated Read Value: Unknown 00:29:37.381 Deallocate in Write Zeroes: Not Supported 00:29:37.381 Deallocated Guard Field: 0xFFFF 00:29:37.381 Flush: Supported 00:29:37.381 Reservation: Not Supported 00:29:37.381 Namespace Sharing Capabilities: Multiple Controllers 00:29:37.381 Size (in LBAs): 1310720 (5GiB) 00:29:37.381 Capacity (in LBAs): 1310720 (5GiB) 00:29:37.381 Utilization (in LBAs): 1310720 (5GiB) 00:29:37.381 UUID: 9209a3b8-278f-44b3-90f8-23a60a220020 00:29:37.381 Thin Provisioning: Not Supported 00:29:37.381 Per-NS Atomic Units: Yes 00:29:37.381 Atomic Boundary Size (Normal): 0 00:29:37.381 Atomic Boundary Size (PFail): 0 00:29:37.381 Atomic Boundary Offset: 0 00:29:37.381 NGUID/EUI64 Never Reused: No 00:29:37.381 ANA group ID: 1 00:29:37.381 Namespace Write Protected: No 00:29:37.381 Number of LBA Formats: 1 00:29:37.381 Current LBA Format: LBA Format #00 00:29:37.381 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:29:37.381 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.381 rmmod nvme_tcp 00:29:37.381 rmmod nvme_fabrics 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:37.381 10:01:24 
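The controller dump that ends above is the identify utility reading back the kernel nvmet target that identify_kernel_nvmf.sh configured: one namespace (NSID 1, 1310720 4K LBAs = 5 GiB) in a single ANA group, with the three error-log entries recording commands the kernel target rejected (status code 0x2, Invalid Field in Command; the get_feature(0x05) failure is visible in the dump itself). Roughly the same view can be reproduced with nvme-cli; a minimal sketch, assuming nvme-cli is installed, the target from this run is still listening on 10.0.0.1:4420, and the controller enumerates as /dev/nvme0:

nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme0      # controller attributes, as dumped above
nvme id-ns /dev/nvme0 -n 1   # the "Active Namespaces" section
nvme disconnect -n nqn.2016-06.io.spdk:testnqn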
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:37.381 10:01:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:37.652 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:37.653 10:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:38.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:38.590 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:38.590 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:38.590 ************************************ 00:29:38.590 END TEST nvmf_identify_kernel_target 00:29:38.590 ************************************ 00:29:38.590 00:29:38.590 real 0m3.499s 00:29:38.590 user 0m1.224s 00:29:38.590 sys 0m1.631s 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.590 ************************************ 00:29:38.590 START TEST nvmf_auth_host 00:29:38.590 ************************************ 00:29:38.590 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:38.849 * Looking for test storage... 
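The clean_kernel_target sequence traced above tears the kernel target down strictly children-first, because configfs refuses to rmdir a directory that is still referenced: disable the namespace, drop the port-to-subsystem symlink, then remove the namespace, port, and subsystem directories before unloading nvmet_tcp/nvmet and letting setup.sh rebind the NVMe devices to uio_pci_generic. A generic version of that teardown (paths hard-code this run's subsystem NQN, port 1, and namespace 1):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"  # disable the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"            # children before parents
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet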
00:29:38.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.849 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:38.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.850 --rc genhtml_branch_coverage=1 00:29:38.850 --rc genhtml_function_coverage=1 00:29:38.850 --rc genhtml_legend=1 00:29:38.850 --rc geninfo_all_blocks=1 00:29:38.850 --rc geninfo_unexecuted_blocks=1 00:29:38.850 00:29:38.850 ' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:38.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.850 --rc genhtml_branch_coverage=1 00:29:38.850 --rc genhtml_function_coverage=1 00:29:38.850 --rc genhtml_legend=1 00:29:38.850 --rc geninfo_all_blocks=1 00:29:38.850 --rc geninfo_unexecuted_blocks=1 00:29:38.850 00:29:38.850 ' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:38.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.850 --rc genhtml_branch_coverage=1 00:29:38.850 --rc genhtml_function_coverage=1 00:29:38.850 --rc genhtml_legend=1 00:29:38.850 --rc geninfo_all_blocks=1 00:29:38.850 --rc geninfo_unexecuted_blocks=1 00:29:38.850 00:29:38.850 ' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:38.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.850 --rc genhtml_branch_coverage=1 00:29:38.850 --rc genhtml_function_coverage=1 00:29:38.850 --rc genhtml_legend=1 00:29:38.850 --rc geninfo_all_blocks=1 00:29:38.850 --rc geninfo_unexecuted_blocks=1 00:29:38.850 00:29:38.850 ' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:38.850 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:38.851 Cannot find device "nvmf_init_br" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:38.851 Cannot find device "nvmf_init_br2" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:38.851 Cannot find device "nvmf_tgt_br" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:38.851 Cannot find device "nvmf_tgt_br2" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:38.851 Cannot find device "nvmf_init_br" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:38.851 Cannot find device "nvmf_init_br2" 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:29:38.851 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:39.109 Cannot find device "nvmf_tgt_br" 00:29:39.109 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:29:39.109 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:39.109 Cannot find device "nvmf_tgt_br2" 00:29:39.109 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:39.110 Cannot find device "nvmf_br" 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:39.110 Cannot find device "nvmf_init_if" 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:39.110 Cannot find device "nvmf_init_if2" 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:39.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:39.110 10:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:39.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
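This stretch is nvmf_veth_init building the test network from scratch; the earlier "Cannot find device" messages are just its idempotent pre-cleanup, where every delete is allowed to fail, and the remaining master assignment plus the SPDK_NVMF-tagged iptables ACCEPT rules follow below. The topology is four veth pairs joined by the nvmf_br bridge: the initiator ends (10.0.0.1, 10.0.0.2) stay in the root namespace while the target ends (10.0.0.3, 10.0.0.4) live in nvmf_tgt_ns_spdk. A single-pair reduction of the same recipe:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                            # bridge the *_br ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

Tagging every firewall rule with an SPDK_NVMF comment is what lets nvmftestfini (seen earlier) restore the firewall with a blanket iptables-save | grep -v SPDK_NVMF | iptables-restore instead of tracking rules individually.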
00:29:39.110 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:39.368 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:39.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:39.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:29:39.368 00:29:39.368 --- 10.0.0.3 ping statistics --- 00:29:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.368 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:39.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:39.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:29:39.369 00:29:39.369 --- 10.0.0.4 ping statistics --- 00:29:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.369 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:39.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:29:39.369 00:29:39.369 --- 10.0.0.1 ping statistics --- 00:29:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.369 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:39.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:39.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:29:39.369 00:29:39.369 --- 10.0.0.2 ping statistics --- 00:29:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.369 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=102668 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 102668 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 102668 ']' 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
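With all four ping checks above green, nvmfappstart launches the target inside the namespace so it binds the 10.0.0.3/10.0.0.4 side of the bridge, with -L nvme_auth enabling the auth-specific debug log. The "Waiting for process..." message belongs to waitforlisten, which polls the RPC socket rather than sleeping for a fixed time; a simplified sketch, assuming the default /var/tmp/spdk.sock socket:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# retry until the app answers an RPC; bail out if it died during startup
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done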
00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.369 10:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.307 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.307 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:40.307 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.307 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.307 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43ae7ef36183b00e739dd136f76f363b 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gP1 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43ae7ef36183b00e739dd136f76f363b 0 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43ae7ef36183b00e739dd136f76f363b 0 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43ae7ef36183b00e739dd136f76f363b 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:40.567 10:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gP1 00:29:40.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gP1 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gP1 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.568 10:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb8019281bbf2b9bba214c095fdac6443ff979f869aed650d5e842556042fd94 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EqM 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb8019281bbf2b9bba214c095fdac6443ff979f869aed650d5e842556042fd94 3 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb8019281bbf2b9bba214c095fdac6443ff979f869aed650d5e842556042fd94 3 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb8019281bbf2b9bba214c095fdac6443ff979f869aed650d5e842556042fd94 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EqM 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EqM 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.EqM 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9372817ecf5c7e47cac86435589575dee1a94c9cb4cfad32 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Pqa 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9372817ecf5c7e47cac86435589575dee1a94c9cb4cfad32 0 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9372817ecf5c7e47cac86435589575dee1a94c9cb4cfad32 0 
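Each gen_dhchap_key call traced here draws half as many random bytes as the requested hex length (xxd -p -c0 -l 16 yields a 32-character key), feeds the hex string plus a digest index into a small inline python formatter, and locks the file down to 0600. A rough equivalent of that formatter, on the assumption that it emits the spec's DHHC-1 secret representation, i.e. base64 over the key bytes plus their CRC-32, tagged with the hash index (00 = unhashed, 01 = sha256, 02 = sha384, 03 = sha512):

key=$(xxd -p -c0 -l 16 /dev/urandom)  # 32 hex chars, as in the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 -c 'import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)  # little-endian CRC-32 of the key
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")' "$key" 0 > "$file"
chmod 0600 "$file"  # secret material, owner-only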
00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9372817ecf5c7e47cac86435589575dee1a94c9cb4cfad32 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Pqa 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Pqa 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Pqa 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=60a3707b1669e926bee87e111faf2f5d4e38ded60d13f0c2 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.do7 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 60a3707b1669e926bee87e111faf2f5d4e38ded60d13f0c2 2 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 60a3707b1669e926bee87e111faf2f5d4e38ded60d13f0c2 2 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=60a3707b1669e926bee87e111faf2f5d4e38ded60d13f0c2 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:40.568 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.do7 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.do7 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.do7 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.828 10:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c336903de44cc15385ec3a11081d3bcb 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x2A 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c336903de44cc15385ec3a11081d3bcb 1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c336903de44cc15385ec3a11081d3bcb 1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c336903de44cc15385ec3a11081d3bcb 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x2A 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x2A 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x2A 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=588edbbe818976080ef4a4fe9e9fb430 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EQv 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 588edbbe818976080ef4a4fe9e9fb430 1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 588edbbe818976080ef4a4fe9e9fb430 1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=588edbbe818976080ef4a4fe9e9fb430 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EQv 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EQv 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.EQv 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f7f97c4c4ab202616764fc25e4df725b898553df1f52a361 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.C5l 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f7f97c4c4ab202616764fc25e4df725b898553df1f52a361 2 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f7f97c4c4ab202616764fc25e4df725b898553df1f52a361 2 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f7f97c4c4ab202616764fc25e4df725b898553df1f52a361 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.C5l 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.C5l 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.C5l 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:40.828 10:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65ed60cf046c57d5eccd64775e7dc865 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pkQ 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65ed60cf046c57d5eccd64775e7dc865 0 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65ed60cf046c57d5eccd64775e7dc865 0 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65ed60cf046c57d5eccd64775e7dc865 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:40.828 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pkQ 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pkQ 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pkQ 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2875089de986555c917423569a5a62ec0f3dbc1b15dae8cdf1571ddb5f2cec81 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.eWl 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2875089de986555c917423569a5a62ec0f3dbc1b15dae8cdf1571ddb5f2cec81 3 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2875089de986555c917423569a5a62ec0f3dbc1b15dae8cdf1571ddb5f2cec81 3 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2875089de986555c917423569a5a62ec0f3dbc1b15dae8cdf1571ddb5f2cec81 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.eWl 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.eWl 00:29:41.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eWl 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 102668 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 102668 ']' 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.088 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gP1 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.EqM ]] 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EqM 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Pqa 00:29:41.347 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.do7 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.do7 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x2A 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.EQv ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EQv 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.C5l 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pkQ ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pkQ 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eWl 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.348 10:01:28 
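
For reference: the host/auth.sh@80-82 entries above register each generated secret with the SPDK target's keyring over JSON-RPC (rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py on /var/tmp/spdk.sock). The traced loop reduces to roughly this sketch:

# Sketch of the traced registration loop (host/auth.sh@80-82).
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[] may be empty for a keyid (ckeys[4] is, above); only register
    # the controller-side key when one was generated
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
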
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:41.348 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:41.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:41.607 Waiting for block devices as requested 00:29:41.865 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:41.865 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:42.433 No valid GPT data, bailing 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:42.433 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:42.434 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:42.434 No valid GPT data, bailing 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:42.434 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:42.693 No valid GPT data, bailing 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:42.693 No valid GPT data, bailing 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.1 -t tcp -s 4420 00:29:42.693 00:29:42.693 Discovery Log Number of Records 2, Generation counter 2 00:29:42.693 =====Discovery Log Entry 0====== 00:29:42.693 trtype: tcp 00:29:42.693 adrfam: ipv4 00:29:42.693 subtype: current discovery subsystem 00:29:42.693 treq: not specified, sq flow control disable supported 00:29:42.693 portid: 1 00:29:42.693 trsvcid: 4420 00:29:42.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:42.693 traddr: 10.0.0.1 00:29:42.693 eflags: none 00:29:42.693 sectype: none 00:29:42.693 =====Discovery Log Entry 1====== 00:29:42.693 trtype: tcp 00:29:42.693 adrfam: ipv4 00:29:42.693 subtype: nvme subsystem 00:29:42.693 treq: not specified, sq flow control disable supported 00:29:42.693 portid: 1 00:29:42.693 trsvcid: 4420 00:29:42.693 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:42.693 traddr: 10.0.0.1 00:29:42.693 eflags: none 00:29:42.693 sectype: none 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:42.693 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.952 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.953 nvme0n1 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.953 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 nvme0n1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 
10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.212 10:01:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.212 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.213 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.471 nvme0n1 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.471 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:43.472 10:01:30 
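
For reference: connect_authenticate (host/auth.sh@55-61), traced repeatedly around here, is the host side of each case. Stripped of the harness, the same RPC pair looks roughly like this; the rpc.py path is assumed from the repo layout seen earlier in this log:

# Sketch: host-side authentication path (host/auth.sh@60-61, @64-65).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the authenticated controller came up, then tear it down
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0
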
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.472 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.472 nvme0n1 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.472 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.730 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.731 nvme0n1 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.731 
10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.731 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
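
For reference: each case begins with nvmet_auth_set_key <digest> <dhgroup> <keyid> (host/auth.sh@42-51), which pushes the expected secrets into the kernel target's per-host auth attributes. The dhchap_* attribute names below are the kernel nvmet ones and are an assumption, since xtrace hides redirect targets; the secrets are elided here:

# Sketch of nvmet_auth_set_key sha256 ffdhe2048 <keyid> (attribute names assumed).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # traced: echo 'hmac(sha256)'
echo ffdhe2048 > "$host/dhchap_dhgroup"        # traced: echo ffdhe2048
echo 'DHHC-1:...' > "$host/dhchap_key"         # the keyid's host secret
echo 'DHHC-1:...' > "$host/dhchap_ctrl_key"    # only when ckeys[keyid] is non-empty
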
00:29:43.990 nvme0n1 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.990 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:44.249 10:01:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.249 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.508 nvme0n1 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.508 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.508 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.508 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.508 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.508 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.508 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.509 10:01:32 
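
For reference: the @100/@101/@102 entries above are the sweep that drives the remainder of this section. Schematically, host/auth.sh@100-104 runs every digest x DH group x key id, each case being set-key, connect, verify, detach:

# Sketch of the traced sweep (host/auth.sh@100-104).
for digest in "${digests[@]}"; do          # sha256 sha384 sha512
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do     # 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
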
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.509 10:01:32 
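All secrets in this run use the NVMe in-band authentication secret representation, DHHC-1:<id>:<base64>:. The two-digit id encodes how the secret was generated (00 = raw, used as-is; 01/02/03 = derived with SHA-256/384/512, giving 32/48/64-byte secrets), and the base64 payload carries the secret followed by a 4-byte CRC-32 — that framing comes from the NVMe spec, not from anything visible in the log itself. A quick length sanity check on such a key (the key below is an illustrative placeholder, not one of the secrets from this run):

  key='DHHC-1:00:c2VjcmV0MDEyMzQ1Njc4OWFiY2RlZjAxMjM0NUNSQzM=:'
  b64=${key#DHHC-1:??:}   # strip the 'DHHC-1:<id>:' prefix
  b64=${b64%:}            # and the trailing colon
  printf '%s' "$b64" | base64 -d | wc -c   # prints 32 for this placeholder; a real key
                                           # decodes to secret length + 4 CRC bytes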
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.509 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.768 nvme0n1 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.768 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.027 nvme0n1 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.027 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.028 nvme0n1 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.028 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.287 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.288 nvme0n1 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.288 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.856 10:01:33 
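The nvmf/common.sh@769-783 run that precedes every attach (and repeats just below) is the address lookup helper, get_main_ns_ip: it maps the transport to the name of the environment variable carrying the right address and dereferences it, which with the tcp transport always lands on NVMF_INITIATOR_IP, i.e. 10.0.0.1 here. A reconstruction inferred from the trace — the associative-array entries and variable names are literal, the control flow around them is an assumption:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # the two [[ -z ... ]] guards at @775 in the trace
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # @776
      [[ -z ${!ip} ]] && return 1            # @778: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # @783: -> 10.0.0.1 in this run
  }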
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.856 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.115 nvme0n1 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.115 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.116 10:01:33 
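host/auth.sh@58, traced as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is conditional argument construction: the :+ expansion yields the option pair only when a controller key exists for this keyid, and otherwise leaves ckey a zero-element array, so "${ckey[@]}" contributes nothing to the later attach call. That is exactly why the keyid=4 attaches in this log carry --dhchap-key key4 with no --dhchap-ctrlr-key. The idiom in isolation (array contents illustrative):

  ckeys=([1]='DHHC-1:02:placeholder:' [4]='')   # keyid 4 has no controller key
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid adds ${#ckey[@]} args"  # 2 for keyid 1, 0 for keyid 4
  done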
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.116 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.375 nvme0n1 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.376 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.635 nvme0n1 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.635 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.636 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.895 nvme0n1 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.895 10:01:34 
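keyid 4 is the host-auth-only case: its ckeys entry is empty, so the [[ -z '' ]] test above skips the controller-key write on the target, ckey expands to nothing, and the attach that follows presents a single secret — the target authenticates the host but is not asked to authenticate itself. For contrast with the keyid 0-3 calls, the resulting command (same hypothetical rpc.py invocation as in the earlier sketches):

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4   # no --dhchap-ctrlr-key: unidirectional authentication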
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.895 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.154 nvme0n1 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.154 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.056 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 nvme0n1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.329 nvme0n1 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.329 10:01:36 
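Stepping back, the markers at host/auth.sh@101-104 give the shape of the sweep this excerpt is walking: an outer loop over DH groups, an inner loop over the five key slots, each pass setting the target key and then doing an authenticated connect. Reconstructed loop, listing only the groups this excerpt actually reaches (the script's full dhgroups array is not visible here):

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do        # host/auth.sh@101
      for keyid in 0 1 2 3 4; do                          # host/auth.sh@102
          nvmet_auth_set_key   sha256 "$dhgroup" "$keyid" # @103: target side
          connect_authenticate sha256 "$dhgroup" "$keyid" # @104: initiator side
      done
  done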
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.329 10:01:36 
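
The nvmet_auth_set_key calls traced here (host/auth.sh@42-51) configure the target side of DH-HMAC-CHAP for one digest/dhgroup/key combination: the echoes push the HMAC name, the FFDHE group, and the DHHC-1 secret, with the controller secret written only when a ckey exists for that keyid. A minimal sketch of the routine, assuming the stock Linux nvmet configfs layout; the configfs paths are an assumption, since the trace shows only the echoed values:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Host entry created earlier for the initiator NQN this test uses.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "${keys[keyid]}"  > "$host/dhchap_key"      # the DHHC-1:0X:... secret
        # Bidirectional auth only when a controller key is defined for this keyid.
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }
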
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.329 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.613 nvme0n1 00:29:49.613 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.613 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.613 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.613 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.613 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.884 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.885 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:49.885 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.885 
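
On the initiator side, each connect_authenticate pass (host/auth.sh@55-61) reduces to the two RPCs visible in the trace: pin the host to a single digest/dhgroup, then attach with the matching keys. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and key0..key4 / ckey0..ckey4 are names of keys registered with bdev_nvme earlier in the run (not shown in this excerpt). A condensed sketch built from the traced command lines:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow exactly one negotiation path so each iteration tests one combination.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over TCP with DH-HMAC-CHAP; the controller key is optional and
        # only passed when a ckey exists for this keyid (the @58 ckey=() idiom).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    }
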
10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 nvme0n1 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.144 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.403 nvme0n1 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.403 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.663 10:01:38 
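
The loop headers that keep reappearing in the trace (host/auth.sh@100 for digest, @101 for dhgroup, @102 for keyid) give the shape of the whole run: a sweep over every digest x dhgroup x key combination, each getting its own set-key / connect / verify / detach cycle. A reconstructed skeleton, assuming the keys/ckeys arrays and the two routines sketched above; the array contents are inferred from the combinations this excerpt happens to cover (sha256 then sha384, ffdhe6144/ffdhe8192 then wrapping to ffdhe2048), so treat them as illustrative:

    digests=(sha256 sha384 sha512)        # sha512 assumed from the usual digest set
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do             # keys 0..4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
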
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.663 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.231 nvme0n1 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.231 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.232 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.799 nvme0n1 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.799 
10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.799 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.367 nvme0n1 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.367 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.368 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.935 nvme0n1 00:29:52.935 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.935 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.935 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.935 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.935 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.935 10:01:40 
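
get_main_ns_ip (nvmf/common.sh@769-783), traced in full just above, resolves which address the initiator should dial: an associative array maps the transport to the name of an environment variable, and the value behind that name (here NVMF_INITIATOR_IP=10.0.0.1) is echoed. The indirect expansions on the final steps are inferred, since xtrace only prints already-expanded values, and variable names not shown in the trace (such as the transport variable) are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs (this job) dial the initiator IP
        [[ -z $TEST_TRANSPORT ]] && return 1         # trace: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}         # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                # trace: echo 10.0.0.1
    }
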
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.194 10:01:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.194 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 nvme0n1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:53.762 nvme0n1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.762 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.763 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.021 nvme0n1 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:54.021 
10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.021 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.022 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.022 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.022 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 nvme0n1 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.280 
10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 nvme0n1 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.280 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.538 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.539 nvme0n1 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.539 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.798 nvme0n1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.798 
10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.798 10:01:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.798 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.057 nvme0n1 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:55.057 10:01:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.057 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.058 nvme0n1 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.058 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.317 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 nvme0n1 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.318 
10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.318 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
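At this point the trace has just attached with key4 of the sha384/ffdhe3072 pass; after verifying and detaching it repeats the same cycle for ffdhe4096. For readers following the xtrace, the sketch below reconstructs the shape of that sweep from the trace markers (host/auth.sh@101-@104, @42-@51, @55-@65 and nvmf/common.sh@769-@783). The helper names, RPC flags, NQNs, and the ckey expansion appear verbatim in the log; the assembled loop body and the explicit dhgroup list are paraphrased assumptions, not the verbatim host/auth.sh.

# Approximate shape of the sweep driving this log (reconstruction, not verbatim host/auth.sh).
# keys[] / ckeys[] are assumed to hold the DHHC-1 secrets loaded earlier in the test;
# rpc_cmd is the harness wrapper around SPDK's JSON-RPC client.
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do      # @101 in the trace
    for keyid in "${!keys[@]}"; do                    # @102
        # @103 nvmet_auth_set_key: programs the kernel nvmet target side
        # (writes 'hmac(sha384)', the dhgroup, and the DHHC-1 key/ckey -- @48-@51).
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # @104 connect_authenticate: configure the host side and attach.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58, verbatim
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # @64-@65: confirm the authenticated connect produced a controller, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

Note that keyid 4 carries no controller key in this run (the trace shows ckey= and [[ -z '' ]]), so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key for that pass, which appears to exercise host-only rather than bidirectional authentication.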
00:29:55.578 nvme0n1 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:55.578 10:01:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.578 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.837 nvme0n1 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.837 10:01:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.837 10:01:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.837 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.838 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:55.838 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.838 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.096 nvme0n1 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.097 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.356 nvme0n1 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.357 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.616 nvme0n1 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.616 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.876 nvme0n1 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.876 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
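The "for" lines tagged host/auth.sh@100, @101 and @102 that keep reappearing above are three nested sweeps driving this whole section: digest, then DH group, then key index. Reconstructed from the traced line numbers, the driver looks roughly like the sketch below; the digests/dhgroups/keys/ckeys arrays are populated earlier in auth.sh and are only assumed here.

    # Sketch of the sweep at host/auth.sh@100-104, reconstructed from the trace.
    # digests, dhgroups, keys and ckeys are assumed to be filled in beforehand.
    for digest in "${digests[@]}"; do                            # @100: sha384 here, sha512 later
        for dhgroup in "${dhgroups[@]}"; do                      # @101: ffdhe4096 ... ffdhe8192
            for keyid in "${!keys[@]}"; do                       # @102: 0 1 2 3 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103: target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: initiator side
            done
        done
    done

Every iteration logged below is one trip through that inner body.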
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.135 10:01:44 
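The key=DHHC-1:...: and ckey=DHHC-1:...: values echoed at host/auth.sh@45-46 are DH-HMAC-CHAP secrets in the standard NVMe presentation format DHHC-1:<t>:<base64>:, where the base64 payload is the raw secret followed by a 4-byte CRC-32 and <t> names the optional secret transform (00 = none). In this run the field also tracks the secret size: 00/01 pair with 32-byte secrets, 02 with 48-byte, 03 with 64-byte. A minimal sanity check, written as a hypothetical helper that is not part of the suite:

    # Hypothetical helper: report the decoded payload size of a DHHC-1 secret.
    # payload = secret || CRC-32, so 48 base64 chars -> 36 bytes -> 32-byte secret.
    dhchap_key_bytes() {
        local b64=${1#DHHC-1:??:}    # drop the "DHHC-1:<t>:" prefix
        b64=${b64%:}                 # drop the trailing ":"
        printf '%s' "$b64" | base64 -d | wc -c
    }
    dhchap_key_bytes 'DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr:'   # prints 36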
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.135 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.394 nvme0n1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.394 10:01:44 
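Each connect_authenticate block traced at host/auth.sh@55-65 has the same shape: pin the initiator to a single digest/dhgroup via bdev_nvme_set_options, attach over TCP with the host key (plus the controller key when one is registered), check that the controller actually appeared, and tear it down again. The bare nvme0n1 lines between RPCs are the attach call's output, the namespace bdev created on success. Paraphrased from the traced lines:

    # Paraphrase of connect_authenticate (host/auth.sh@55-65) as it appears in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey                               # @55-57
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})              # @58
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"            # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                            # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]    # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                              # @65
    }

Note that --dhchap-key and --dhchap-ctrlr-key take key names (key2, ckey2, ...) rather than the DHHC-1 strings themselves; the named keys are presumably registered earlier in the suite.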
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.394 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.653 nvme0n1 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.653 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.911 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.169 nvme0n1 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.169 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.170 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.170 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:58.170 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.170 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.427 nvme0n1 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.427 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.686 10:01:46 
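The keyid=4 passes (this one and the ffdhe4096 one above) are the unidirectional case: host/auth.sh@46 sets ckey= to the empty string, so the ${ckeys[keyid]:+...} expansion at @58 produces an empty array and the attach carries --dhchap-key key4 with no --dhchap-ctrlr-key, meaning only the host is authenticated. Keys 0-3 also pass a controller key, so the controller authenticates back. The :+ operator expands to its alternative only when the variable is set and non-empty:

    ckeys=(secret1 secret2 '')    # illustrative values only
    keyid=1; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${#args[@]}"   # 2
    keyid=2; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${#args[@]}"   # 0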
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.686 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.944 nvme0n1 00:29:58.944 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.944 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.944 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.944 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
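The get_main_ns_ip helper traced from nvmf/common.sh@769-783 picks the address the initiator dials by transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which this virt/net setup resolves to 10.0.0.1. A rough reconstruction from the traced lines (TEST_TRANSPORT and the NVMF_* variables are assumed to be exported elsewhere in the suite):

    # Rough reconstruction of get_main_ns_ip (nvmf/common.sh@769-783).
    get_main_ns_ip() {
        local ip                                                # @769
        local -A ip_candidates=()                               # @770
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP              # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP                  # @773
        [[ -z $TEST_TRANSPORT ]] && return 1                    # @775
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # @776: the variable *name*
        [[ -z ${!ip} ]] && return 1                             # @778: it must hold a value
        echo "${!ip}"                                           # @783: 10.0.0.1 here
    }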
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
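The group names being swept (ffdhe4096 and ffdhe6144 above, ffdhe8192 from here) are the RFC 7919 finite-field Diffie-Hellman groups that NVMe DH-HMAC-CHAP reuses. ffdhe8192 has the largest modulus of the set, which is why the iterations below visibly slow down relative to the earlier groups.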
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.945 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.512 nvme0n1 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.512 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.513 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.080 nvme0n1 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.081 10:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.081 10:01:47 
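On the target side, nvmet_auth_set_key (host/auth.sh@42-51) loads the digest, DH group and secrets for the current keyid into the kernel nvmet host entry; the echo 'hmac(sha384)', echo ffdhe8192 and echo DHHC-1:... lines in the trace are those writes. xtrace does not show redirect targets, so the configfs paths in this sketch are an assumption based on the standard nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key):

    # Sketch of nvmet_auth_set_key (host/auth.sh@42-51); the redirect paths are
    # assumed, since xtrace only records the echo arguments.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3                         # @42-44
        local key=${keys[keyid]} ckey=${ckeys[keyid]}               # @45-46
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"                  # @48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"               # @49
        echo "$key"          > "$host/dhchap_key"                   # @50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"    # @51
    }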
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.081 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.649 nvme0n1 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:00.649 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.649 
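Two recurring idioms in the verify step deserve a note. First, the [[ nvme0 == \n\v\m\e\0 ]] test at host/auth.sh@64 backslash-escapes every character of the expected name because == inside [[ ]] glob-matches an unquoted right-hand side; the escapes pin it to a literal comparison with whatever jq -r '.[].name' extracted from bdev_nvme_get_controllers:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]      # literal match, equivalent to [[ $name == "nvme0" ]]

Second, the xtrace_disable / set +x / [[ 0 == 0 ]] triplet bracketing every RPC comes from common/autotest_common.sh (@563, @10, @591) and appears to be the rpc_cmd wrapper muting trace output inside the call and asserting a zero exit status afterwards.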
10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 nvme0n1 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.216 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.475 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.476 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.042 nvme0n1 00:30:02.042 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.042 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.042 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.042 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:02.043 10:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.043 10:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 nvme0n1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:02.043 10:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.043 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.302 nvme0n1 00:30:02.302 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.302 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.302 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.302 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.303 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 nvme0n1 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 nvme0n1 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.590 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.591 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.850 nvme0n1 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.850 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.109 nvme0n1 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.109 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.110 nvme0n1 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.110 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.368 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.368 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.368 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.368 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:03.369 
10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 nvme0n1 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.369 
10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.369 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.370 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.370 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.370 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:03.370 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.370 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.629 nvme0n1 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.629 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.887 nvme0n1 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.887 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.888 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.146 nvme0n1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.146 
10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.146 10:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.146 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.405 nvme0n1 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:04.405 10:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.405 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.664 nvme0n1 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.664 10:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.664 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.922 nvme0n1 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.922 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.923 
10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.923 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
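
The four `echo` commands traced at host/auth.sh@48-51 in each round above are the whole body of nvmet_auth_set_key: they program the kernel nvmet target with the digest, the FFDHE group, and the per-keyid DHHC-1 secret before the host attempts to connect. Because `set -x` does not print redirections, the write targets never appear in the trace; the sketch below is a hedged reconstruction that assumes the standard Linux nvmet configfs attributes and takes the host NQN from the -q argument of the attach calls. The keys and ckeys arrays are the ones the suite iterates with `for keyid in "${!keys[@]}"`.

    # Hedged reconstruction of the traced nvmet_auth_set_key helper.
    # Assumptions: the redirect targets (invisible under xtrace) are the
    # standard nvmet configfs attributes, and the host entry matches the
    # hostnqn used by the attach calls (nqn.2024-02.io.spdk:host0).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # ffdhe3072..ffdhe8192
        echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:0X:<base64>:
        # keyid 4 has no controller key, hence the traced [[ -z '' ]] guard;
        # dhchap_ctrl_key is only set for the bidirectional cases.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }

The DHHC-1:<t>: prefix on the secrets appears to encode the transformation hash id (00 = none, 01/02/03 = SHA-256/384/512), and the base64 payload carries the 32/48/64-byte secret plus a 4-byte CRC-32, which is why the keyid 0 keys above are visibly shorter than the keyid 3 and 4 ones.
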
00:30:05.181 nvme0n1 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:05.181 10:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.181 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.747 nvme0n1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.747 10:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.747 10:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.747 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.006 nvme0n1 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.006 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.264 nvme0n1 00:30:06.264 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.264 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.264 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.264 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.264 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.523 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.524 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.783 nvme0n1 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.783 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.043 nvme0n1 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.043 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNhZTdlZjM2MTgzYjAwZTczOWRkMTM2Zjc2ZjM2M2Lx1FTr: 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWI4MDE5MjgxYmJmMmI5YmJhMjE0YzA5NWZkYWM2NDQzZmY5NzlmODY5YWVkNjUwZDVlODQyNTU2MDQyZmQ5NBLVbjc=: 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.302 10:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.302 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.869 nvme0n1 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.869 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:07.870 10:01:55 
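[Annotation] On the target side, nvmet_auth_set_key's echoes ('hmac(sha512)', the dhgroup name, and the DHHC-1 secrets) are written into the kernel nvmet host entry; the redirection targets are not captured in this trace, so the attribute names below are an assumption based on the nvmet configfs ABI rather than something shown here:

  # Hypothetical expansion of nvmet_auth_set_key for keyid=1 (secrets abbreviated;
  # the full DHHC-1 values appear verbatim in the trace above).
  H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'      > "$H/dhchap_hash"      # assumed attribute name
  echo 'ffdhe8192'         > "$H/dhchap_dhgroup"   # assumed attribute name
  echo 'DHHC-1:00:OTM3...' > "$H/dhchap_key"       # host secret
  echo 'DHHC-1:02:NjBh...' > "$H/dhchap_ctrl_key"  # controller secret, only when a ckey exists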
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.870 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.435 nvme0n1 00:30:08.435 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.435 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.435 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.436 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.002 nvme0n1 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.002 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjdmOTdjNGM0YWIyMDI2MTY3NjRmYzI1ZTRkZjcyNWI4OTg1NTNkZjFmNTJhMzYxyG2IWA==: 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlZDYwY2YwNDZjNTdkNWVjY2Q2NDc3NWU3ZGM4NjX9ZFAN: 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.003 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.570 nvme0n1 00:30:09.570 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.570 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg3NTA4OWRlOTg2NTU1YzkxNzQyMzU2OWE1YTYyZWMwZjNkYmMxYjE1ZGFlOGNkZjE1NzFkZGI1ZjJjZWM4MZkQRZg=: 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.571 10:01:56 
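[Annotation] keyid=4 is the one entry in the matrix with an empty ckey (note the bare "ckey=" and the "[[ -z '' ]]" branch above), so the iteration continuing below exercises unidirectional authentication: the host proves itself to the target but does not challenge the controller back. A sketch of that attach, which carries only the host key:

  # Unidirectional DH-HMAC-CHAP: no --dhchap-ctrlr-key, since ckeys[4] is empty.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4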
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.571 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.571 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.571 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.571 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.139 nvme0n1 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.139 2024/12/05 10:01:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:10.139 request: 00:30:10.139 { 00:30:10.139 "method": "bdev_nvme_attach_controller", 00:30:10.139 "params": { 00:30:10.139 "name": "nvme0", 00:30:10.139 "trtype": "tcp", 00:30:10.139 "traddr": "10.0.0.1", 00:30:10.139 "adrfam": "ipv4", 00:30:10.139 "trsvcid": "4420", 00:30:10.139 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:10.139 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:10.139 "prchk_reftag": false, 00:30:10.139 "prchk_guard": false, 00:30:10.139 "hdgst": false, 00:30:10.139 "ddgst": false, 00:30:10.139 "allow_unrecognized_csi": false 00:30:10.139 } 00:30:10.139 } 00:30:10.139 Got JSON-RPC error response 00:30:10.139 GoRPCClient: error on JSON-RPC call 00:30:10.139 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.140 2024/12/05 10:01:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:10.140 request: 00:30:10.140 { 00:30:10.140 "method": "bdev_nvme_attach_controller", 00:30:10.140 "params": { 00:30:10.140 "name": "nvme0", 00:30:10.140 "trtype": "tcp", 00:30:10.140 "traddr": "10.0.0.1", 00:30:10.140 "adrfam": "ipv4", 00:30:10.140 "trsvcid": "4420", 00:30:10.140 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:10.140 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:10.140 "prchk_reftag": false, 00:30:10.140 "prchk_guard": false, 
00:30:10.140 "hdgst": false, 00:30:10.140 "ddgst": false, 00:30:10.140 "dhchap_key": "key2", 00:30:10.140 "allow_unrecognized_csi": false 00:30:10.140 } 00:30:10.140 } 00:30:10.140 Got JSON-RPC error response 00:30:10.140 GoRPCClient: error on JSON-RPC call 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.140 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.400 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.400 2024/12/05 10:01:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:10.400 request: 00:30:10.400 { 00:30:10.400 "method": "bdev_nvme_attach_controller", 00:30:10.400 "params": { 00:30:10.400 "name": "nvme0", 00:30:10.400 "trtype": "tcp", 00:30:10.400 "traddr": "10.0.0.1", 00:30:10.400 "adrfam": "ipv4", 00:30:10.400 "trsvcid": "4420", 00:30:10.401 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:10.401 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:10.401 "prchk_reftag": false, 00:30:10.401 "prchk_guard": false, 00:30:10.401 "hdgst": false, 00:30:10.401 "ddgst": false, 00:30:10.401 "dhchap_key": "key1", 00:30:10.401 "dhchap_ctrlr_key": "ckey2", 00:30:10.401 "allow_unrecognized_csi": false 00:30:10.401 } 00:30:10.401 } 00:30:10.401 Got JSON-RPC error response 00:30:10.401 GoRPCClient: error on JSON-RPC call 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.401 nvme0n1 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.401 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.660 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.660 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.661 2024/12/05 10:01:58 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:30:10.661 request: 00:30:10.661 { 00:30:10.661 "method": "bdev_nvme_set_keys", 00:30:10.661 "params": { 00:30:10.661 "name": "nvme0", 00:30:10.661 "dhchap_key": "key1", 00:30:10.661 "dhchap_ctrlr_key": "ckey2" 00:30:10.661 } 00:30:10.661 } 00:30:10.661 Got JSON-RPC error response 00:30:10.661 GoRPCClient: error on JSON-RPC call 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:10.661 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:11.597 10:01:59 
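[Annotation] The NOT-wrapped attach attempts earlier in this stretch are the negative half of the suite: after the target is re-keyed to sha256/ffdhe2048 keyid=1, connecting with no key, with key2 alone, or with the key1/ckey2 mismatch all fail at the fabrics layer, which surfaces as Code=-5 Input/output error in the JSON-RPC responses above. The suite's NOT helper inverts the exit status, so each expected failure counts as a pass. A condensed sketch, again assuming rpc.py as the client:

  # Each of these must fail against a target keyed with key1/ckey1.
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0                         # no key at all
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2       # wrong host key
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2                                         # wrong controller key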
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM3MjgxN2VjZjVjN2U0N2NhYzg2NDM1NTg5NTc1ZGVlMWE5NGM5Y2I0Y2ZhZDMybfzL+w==: 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: ]] 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBhMzcwN2IxNjY5ZTkyNmJlZTg3ZTExMWZhZjJmNWQ0ZTM4ZGVkNjBkMTNmMGMyNcNXxw==: 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.597 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.856 nvme0n1 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
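[Annotation] With the target back on key1/ckey1 (sha256/ffdhe2048), host/auth.sh@142 reattaches with aggressive reconnect tuning, presumably so that a subsequent bad re-key drops the controller within about a second instead of stalling the test:

  # Flags copied from the trace; the 1-second loss timeout makes failed
  # reconnects after a bad re-key show up quickly in bdev_nvme_get_controllers.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1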
00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzMzNjkwM2RlNDRjYzE1Mzg1ZWMzYTExMDgxZDNiY2K6D6Ev: 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: ]] 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg4ZWRiYmU4MTg5NzYwODBlZjRhNGZlOWU5ZmI0MzBvrehm: 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.856 2024/12/05 10:01:59 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:30:11.856 request: 00:30:11.856 { 00:30:11.856 "method": "bdev_nvme_set_keys", 00:30:11.856 "params": { 00:30:11.856 "name": "nvme0", 00:30:11.856 "dhchap_key": "key2", 00:30:11.856 "dhchap_ctrlr_key": "ckey1" 00:30:11.856 } 00:30:11.856 } 00:30:11.856 Got JSON-RPC error response 00:30:11.856 GoRPCClient: error on JSON-RPC call 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:11.856 10:01:59 
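[Annotation] bdev_nvme_set_keys re-keys a live controller in place. Earlier (host/auth.sh@133) the matching pair key2/ckey2 was accepted; here, pairs that disagree with the target's current secrets are rejected with Code=-13 Permission denied, the controller subsequently drops, and the sleep-1s polls that follow watch bdev_nvme_get_controllers shrink to zero as the 1-second controller-loss timeout expires. In sketch form, with rpc.py standing in for rpc_cmd as before:

  rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2       # matches target: returns 0
  NOT rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1   # mismatch: Code=-13
  # Wait for the dropped controller to disappear.
  while [[ "$(rpc.py bdev_nvme_get_controllers | jq length)" != 0 ]]; do sleep 1; done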
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.856 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.857 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:11.857 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:12.793 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.793 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.793 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:12.793 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.793 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.052 rmmod nvme_tcp 00:30:13.052 rmmod nvme_fabrics 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 102668 ']' 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 102668 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 102668 ']' 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 102668 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102668 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.052 killing 
process with pid 102668 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102668' 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 102668 00:30:13.052 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 102668 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:13.989 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:14.248 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:14.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:14.837 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:15.120 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:15.120 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gP1 /tmp/spdk.key-null.Pqa /tmp/spdk.key-sha256.x2A /tmp/spdk.key-sha384.C5l /tmp/spdk.key-sha512.eWl /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:30:15.120 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:15.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:15.392 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:15.392 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:15.651 00:30:15.651 real 0m36.897s 00:30:15.651 user 0m33.372s 00:30:15.651 sys 0m4.457s 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.651 ************************************ 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.651 END TEST nvmf_auth_host 00:30:15.651 ************************************ 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.651 ************************************ 00:30:15.651 START TEST nvmf_digest 00:30:15.651 
************************************ 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:15.651 * Looking for test storage... 00:30:15.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:15.651 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.911 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.912 --rc genhtml_branch_coverage=1 00:30:15.912 --rc genhtml_function_coverage=1 00:30:15.912 --rc genhtml_legend=1 00:30:15.912 --rc geninfo_all_blocks=1 00:30:15.912 --rc geninfo_unexecuted_blocks=1 00:30:15.912 00:30:15.912 ' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.912 --rc genhtml_branch_coverage=1 00:30:15.912 --rc genhtml_function_coverage=1 00:30:15.912 --rc genhtml_legend=1 00:30:15.912 --rc geninfo_all_blocks=1 00:30:15.912 --rc geninfo_unexecuted_blocks=1 00:30:15.912 00:30:15.912 ' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.912 --rc genhtml_branch_coverage=1 00:30:15.912 --rc genhtml_function_coverage=1 00:30:15.912 --rc genhtml_legend=1 00:30:15.912 --rc geninfo_all_blocks=1 00:30:15.912 --rc geninfo_unexecuted_blocks=1 00:30:15.912 00:30:15.912 ' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.912 --rc genhtml_branch_coverage=1 00:30:15.912 --rc genhtml_function_coverage=1 00:30:15.912 --rc genhtml_legend=1 00:30:15.912 --rc geninfo_all_blocks=1 00:30:15.912 --rc geninfo_unexecuted_blocks=1 00:30:15.912 00:30:15.912 ' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.912 10:02:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:15.912 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:15.912 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:15.913 Cannot find device "nvmf_init_br" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:15.913 Cannot find device "nvmf_init_br2" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:15.913 Cannot find device "nvmf_tgt_br" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:30:15.913 Cannot find device "nvmf_tgt_br2" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:15.913 Cannot find device "nvmf_init_br" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:15.913 Cannot find device "nvmf_init_br2" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:15.913 Cannot find device "nvmf_tgt_br" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:15.913 Cannot find device "nvmf_tgt_br2" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:15.913 Cannot find device "nvmf_br" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:15.913 Cannot find device "nvmf_init_if" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:15.913 Cannot find device "nvmf_init_if2" 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:15.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:15.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:15.913 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:16.171 10:02:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:16.171 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:16.171 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:16.171 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:30:16.171 00:30:16.171 --- 10.0.0.3 ping statistics --- 00:30:16.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.171 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:16.172 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:16.172 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:30:16.172 00:30:16.172 --- 10.0.0.4 ping statistics --- 00:30:16.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.172 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:16.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:30:16.172 00:30:16.172 --- 10.0.0.1 ping statistics --- 00:30:16.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.172 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:16.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:30:16.172 00:30:16.172 --- 10.0.0.2 ping statistics --- 00:30:16.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.172 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:16.172 ************************************ 00:30:16.172 START TEST nvmf_digest_clean 00:30:16.172 ************************************ 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=104330 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 104330 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 104330 ']' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.172 10:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:16.430 [2024-12-05 10:02:03.924038] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:16.430 [2024-12-05 10:02:03.924215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.688 [2024-12-05 10:02:04.120613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.688 [2024-12-05 10:02:04.281257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.688 [2024-12-05 10:02:04.281349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.688 [2024-12-05 10:02:04.281375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.688 [2024-12-05 10:02:04.281413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.688 [2024-12-05 10:02:04.281435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
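The nvmf_veth_init sequence traced above builds a two-namespace test network: veth pairs whose target-side ends are moved into nvmf_tgt_ns_spdk, a bridge nvmf_br joining the root-namespace peer ends, and iptables ACCEPT rules for port 4420. A condensed sketch of that topology, reduced to a single initiator/target pair (the full script also creates the nvmf_init_if2/nvmf_tgt_if2 pair and the 10.0.0.2/10.0.0.4 addresses shown in the trace):

    # Target interfaces live in their own namespace; the initiator stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    # Bridge the root-namespace peer ends so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Same reachability check the script performs before starting the target:
    ping -c 1 10.0.0.3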
00:30:16.688 [2024-12-05 10:02:04.283119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.623 10:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.881 null0 00:30:17.881 [2024-12-05 10:02:05.304650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.881 [2024-12-05 10:02:05.328856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104386 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104386 /var/tmp/bperf.sock 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 104386 ']' 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:30:17.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.881 10:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.881 [2024-12-05 10:02:05.458797] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:17.881 [2024-12-05 10:02:05.459005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104386 ] 00:30:18.139 [2024-12-05 10:02:05.644228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.139 [2024-12-05 10:02:05.764210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.704 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.704 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:18.704 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:18.704 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:18.704 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:19.272 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.272 10:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.839 nvme0n1 00:30:19.839 10:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:19.839 10:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:19.839 Running I/O for 2 seconds... 
00:30:21.710 17137.00 IOPS, 66.94 MiB/s [2024-12-05T10:02:09.339Z] 17602.00 IOPS, 68.76 MiB/s 00:30:21.710 Latency(us) 00:30:21.710 [2024-12-05T10:02:09.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:21.710 nvme0n1 : 2.00 17627.03 68.86 0.00 0.00 7253.99 3127.85 17277.67 00:30:21.710 [2024-12-05T10:02:09.339Z] =================================================================================================================== 00:30:21.710 [2024-12-05T10:02:09.339Z] Total : 17627.03 68.86 0.00 0.00 7253.99 3127.85 17277.67 00:30:21.710 { 00:30:21.710 "results": [ 00:30:21.710 { 00:30:21.710 "job": "nvme0n1", 00:30:21.710 "core_mask": "0x2", 00:30:21.710 "workload": "randread", 00:30:21.710 "status": "finished", 00:30:21.710 "queue_depth": 128, 00:30:21.710 "io_size": 4096, 00:30:21.710 "runtime": 2.004705, 00:30:21.710 "iops": 17627.032406264265, 00:30:21.710 "mibps": 68.85559533696978, 00:30:21.710 "io_failed": 0, 00:30:21.710 "io_timeout": 0, 00:30:21.710 "avg_latency_us": 7253.986526612591, 00:30:21.710 "min_latency_us": 3127.8545454545456, 00:30:21.710 "max_latency_us": 17277.672727272726 00:30:21.710 } 00:30:21.710 ], 00:30:21.710 "core_count": 1 00:30:21.710 } 00:30:21.969 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:21.969 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:21.969 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:21.969 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:21.969 | select(.opcode=="crc32c") 00:30:21.969 | "\(.module_name) \(.executed)"' 00:30:21.969 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:22.228 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:22.228 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:22.228 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:22.228 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:22.228 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104386 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 104386 ']' 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 104386 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104386 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:30:22.229 killing process with pid 104386 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104386' 00:30:22.229 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.229 00:30:22.229 Latency(us) 00:30:22.229 [2024-12-05T10:02:09.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.229 [2024-12-05T10:02:09.858Z] =================================================================================================================== 00:30:22.229 [2024-12-05T10:02:09.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 104386 00:30:22.229 10:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 104386 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104477 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104477 /var/tmp/bperf.sock 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 104477 ']' 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:22.798 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:22.799 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.799 10:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:23.058 [2024-12-05 10:02:10.486445] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
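Each run_bperf iteration ends the way the randread 4096 run above did: host/digest.sh pulls accel framework statistics over the bperf RPC socket and asserts that the crc32c digests were executed by the expected module (software here, since scan_dsa=false). A sketch of that check, using the jq filter from the trace; the bare exit on failure is a simplification of the script's actual error handling:

    # Extract which accel module executed crc32c, and how many times.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) || exit 1          # digests must actually have run
    [[ $acc_module == software ]] || exit 1   # scan_dsa=false => software crc32c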
00:30:23.058 [2024-12-05 10:02:10.486592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104477 ] 00:30:23.058 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:23.058 Zero copy mechanism will not be used. 00:30:23.058 [2024-12-05 10:02:10.652826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.317 [2024-12-05 10:02:10.766586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.884 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.884 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:23.884 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:23.884 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:23.884 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:24.454 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.454 10:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.714 nvme0n1 00:30:24.714 10:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:24.714 10:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.974 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:24.974 Zero copy mechanism will not be used. 00:30:24.974 Running I/O for 2 seconds... 
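The two NOTICE lines about zero copy are specific to the 131072-byte passes: the TCP initiator only takes its zero-copy send path for buffers at or below the 65536-byte threshold, so 128 KiB I/Os fall back to copying. The flags that trigger the message are the ones from the bdevperf invocation logged above:

    # -o 131072 : 128 KiB I/O size, above the 64 KiB zero-copy threshold
    # -q 16     : queue depth 16 (vs. 128 for the 4 KiB runs)
    # -t 2      : run the workload for 2 seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc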
00:30:26.846 7729.00 IOPS, 966.12 MiB/s [2024-12-05T10:02:14.475Z] 7860.00 IOPS, 982.50 MiB/s 00:30:26.846 Latency(us) 00:30:26.846 [2024-12-05T10:02:14.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.846 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:26.846 nvme0n1 : 2.00 7859.11 982.39 0.00 0.00 2031.80 729.83 10902.81 00:30:26.846 [2024-12-05T10:02:14.475Z] =================================================================================================================== 00:30:26.846 [2024-12-05T10:02:14.475Z] Total : 7859.11 982.39 0.00 0.00 2031.80 729.83 10902.81 00:30:26.846 { 00:30:26.846 "results": [ 00:30:26.846 { 00:30:26.846 "job": "nvme0n1", 00:30:26.846 "core_mask": "0x2", 00:30:26.846 "workload": "randread", 00:30:26.846 "status": "finished", 00:30:26.846 "queue_depth": 16, 00:30:26.846 "io_size": 131072, 00:30:26.846 "runtime": 2.002644, 00:30:26.846 "iops": 7859.110256241249, 00:30:26.846 "mibps": 982.3887820301561, 00:30:26.846 "io_failed": 0, 00:30:26.846 "io_timeout": 0, 00:30:26.846 "avg_latency_us": 2031.799496560368, 00:30:26.846 "min_latency_us": 729.8327272727273, 00:30:26.846 "max_latency_us": 10902.807272727272 00:30:26.846 } 00:30:26.846 ], 00:30:26.846 "core_count": 1 00:30:26.846 } 00:30:26.846 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:26.846 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:26.846 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:26.846 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:26.846 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:26.846 | select(.opcode=="crc32c") 00:30:26.846 | "\(.module_name) \(.executed)"' 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104477 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 104477 ']' 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 104477 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104477 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:30:27.105 killing process with pid 104477 00:30:27.105 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104477' 00:30:27.105 Received shutdown signal, test time was about 2.000000 seconds 00:30:27.105 00:30:27.105 Latency(us) 00:30:27.105 [2024-12-05T10:02:14.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.105 [2024-12-05T10:02:14.734Z] =================================================================================================================== 00:30:27.105 [2024-12-05T10:02:14.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.106 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 104477 00:30:27.106 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 104477 00:30:28.049 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104581 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104581 /var/tmp/bperf.sock 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 104581 ']' 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:28.306 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:28.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.307 10:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:28.307 [2024-12-05 10:02:15.794850] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
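The MiB/s column in these result tables is derived directly from IOPS times I/O size: for the 128 KiB randread run above, 7859.11 IOPS * 131072 bytes / 2^20 gives the 982.39 "mibps" value reported in the JSON. A one-line sanity check:

    # Recompute MiB/s from the reported IOPS and the 131072-byte I/O size.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7859.11 * 131072 / 1048576 }'
    # prints: 982.39 MiB/s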
00:30:28.307 [2024-12-05 10:02:15.795047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104581 ] 00:30:28.565 [2024-12-05 10:02:15.977357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.565 [2024-12-05 10:02:16.085395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:29.700 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.700 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.960 nvme0n1 00:30:30.219 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:30.219 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:30.219 Running I/O for 2 seconds... 
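Every controller in these runs is attached with --ddgst, which is what generates the crc32c work that the accel-stats check counts: with data digest enabled, each NVMe/TCP data PDU carries a CRC32C over its payload. The attach call, as traced above:

    # Attach nvme0 over TCP with data digest (DDGST) enabled; every data PDU
    # is then CRC32C-protected, exercising the accel framework's crc32c path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0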
00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:29.132 10:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:29.700 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:29.700 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:29.960 nvme0n1
00:30:30.219 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:30.219 10:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:30.219 Running I/O for 2 seconds...
00:30:32.531 21746.00 IOPS, 84.95 MiB/s [2024-12-05T10:02:20.160Z] 21839.00 IOPS, 85.31 MiB/s
00:30:32.531 Latency(us)
00:30:32.531 [2024-12-05T10:02:20.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:32.531 nvme0n1 : 2.01 21869.73 85.43 0.00 0.00 5844.45 3053.38 16801.05
00:30:32.531 [2024-12-05T10:02:20.160Z] ===================================================================================================================
00:30:32.531 [2024-12-05T10:02:20.160Z] Total : 21869.73 85.43 0.00 0.00 5844.45 3053.38 16801.05
00:30:32.531 {
00:30:32.531 "results": [
00:30:32.531 {
00:30:32.531 "job": "nvme0n1",
00:30:32.531 "core_mask": "0x2",
00:30:32.531 "workload": "randwrite",
00:30:32.531 "status": "finished",
00:30:32.531 "queue_depth": 128,
00:30:32.531 "io_size": 4096,
00:30:32.531 "runtime": 2.005969,
00:30:32.531 "iops": 21869.729791437454,
00:30:32.531 "mibps": 85.42863199780255,
00:30:32.531 "io_failed": 0,
00:30:32.531 "io_timeout": 0,
00:30:32.531 "avg_latency_us": 5844.454122552169,
00:30:32.531 "min_latency_us": 3053.3818181818183,
00:30:32.531 "max_latency_us": 16801.04727272727
00:30:32.531 }
00:30:32.531 ],
00:30:32.531 "core_count": 1
00:30:32.531 }
00:30:32.531 10:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:32.531 10:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:32.531 10:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:32.531 10:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:32.531 10:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:32.531 | select(.opcode=="crc32c")
00:30:32.531 | "\(.module_name) \(.executed)"'
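The JSON block above is internally consistent: bdevperf derives "mibps" as iops * io_size / 2^20. A quick arithmetic cross-check with the values copied from the JSON (the awk line is just a check, not part of the suite):

    # 21869.729791 IOPS at 4096 bytes per I/O:
    awk 'BEGIN { print 21869.729791437454 * 4096 / 1048576 }'   # prints 85.4286..., the "mibps" value above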
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104581
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 104581 ']'
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 104581
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104581
00:30:32.531 killing process with pid 104581
00:30:32.531 Received shutdown signal, test time was about 2.000000 seconds
00:30:32.531
00:30:32.531 Latency(us)
00:30:32.531 [2024-12-05T10:02:20.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.531 [2024-12-05T10:02:20.160Z] ===================================================================================================================
00:30:32.531 [2024-12-05T10:02:20.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104581'
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 104581
00:30:32.531 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 104581
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104678
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104678 /var/tmp/bperf.sock
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 104678 ']'
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:33.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:33.466 10:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:33.466 [2024-12-05 10:02:20.945623] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:30:33.466 [2024-12-05 10:02:20.945824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104678 ]
00:30:33.466 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:33.466 Zero copy mechanism will not be used.
00:30:33.724 [2024-12-05 10:02:21.126237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:33.724 [2024-12-05 10:02:21.235613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:34.325 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:34.326 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:34.326 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:34.326 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:34.326 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:34.891 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:34.891 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:35.148 nvme0n1
00:30:35.148 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:35.148 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:35.406 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:35.406 Zero copy mechanism will not be used.
00:30:35.406 Running I/O for 2 seconds...
00:30:37.275 4776.00 IOPS, 597.00 MiB/s [2024-12-05T10:02:24.904Z] 4810.50 IOPS, 601.31 MiB/s
00:30:37.275 Latency(us)
00:30:37.275 [2024-12-05T10:02:24.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.275 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:37.275 nvme0n1 : 2.01 4809.83 601.23 0.00 0.00 3319.12 2412.92 11558.17
00:30:37.275 [2024-12-05T10:02:24.904Z] ===================================================================================================================
00:30:37.275 [2024-12-05T10:02:24.904Z] Total : 4809.83 601.23 0.00 0.00 3319.12 2412.92 11558.17
00:30:37.275 {
00:30:37.275 "results": [
00:30:37.275 {
00:30:37.275 "job": "nvme0n1",
00:30:37.275 "core_mask": "0x2",
00:30:37.275 "workload": "randwrite",
00:30:37.275 "status": "finished",
00:30:37.275 "queue_depth": 16,
00:30:37.275 "io_size": 131072,
00:30:37.275 "runtime": 2.005061,
00:30:37.275 "iops": 4809.8287284027765,
00:30:37.275 "mibps": 601.2285910503471,
00:30:37.275 "io_failed": 0,
00:30:37.275 "io_timeout": 0,
00:30:37.275 "avg_latency_us": 3319.1160152332113,
00:30:37.275 "min_latency_us": 2412.9163636363637,
00:30:37.275 "max_latency_us": 11558.167272727273
00:30:37.275 }
00:30:37.275 ],
00:30:37.275 "core_count": 1
00:30:37.275 }
00:30:37.275 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:37.275 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:37.275 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:37.275 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:37.275 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:37.275 | select(.opcode=="crc32c")
00:30:37.275 | "\(.module_name) \(.executed)"'
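get_accel_stats above asks the bdevperf process which accel module actually executed the crc32c operations behind the --ddgst data-digest path, and how many times. The same query can be issued by hand against a live bperf socket; the pipeline is copied from the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # "software <count>" is expected here: scan_dsa=false leaves crc32c on the software
    # module, which is what the [[ software == software ]] check below asserts.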
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104678
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 104678 ']'
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 104678
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104678
00:30:37.534 killing process with pid 104678
00:30:37.534 Received shutdown signal, test time was about 2.000000 seconds
00:30:37.534
00:30:37.534 Latency(us)
00:30:37.534 [2024-12-05T10:02:25.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.534 [2024-12-05T10:02:25.163Z] ===================================================================================================================
00:30:37.534 [2024-12-05T10:02:25.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104678'
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 104678
00:30:37.534 10:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 104678
00:30:38.469 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 104330
00:30:38.469 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 104330 ']'
00:30:38.469 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 104330
00:30:38.469 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104330
00:30:38.727 killing process with pid 104330
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104330'
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 104330
00:30:38.727 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 104330
00:30:39.663 ************************************
00:30:39.663 END TEST nvmf_digest_clean
00:30:39.663 ************************************
00:30:39.663
00:30:39.663 real 0m23.229s
00:30:39.663 user 0m42.721s
00:30:39.663 sys 0m5.643s
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:39.663 ************************************
00:30:39.663 START TEST nvmf_digest_error
00:30:39.663 ************************************
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=104809
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 104809
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 104809 ']'
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:39.663 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:39.663 [2024-12-05 10:02:27.177229] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:30:39.663 [2024-12-05 10:02:27.177374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:39.923 [2024-12-05 10:02:27.346437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:39.923 [2024-12-05 10:02:27.451861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:39.923 [2024-12-05 10:02:27.451926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:39.923 [2024-12-05 10:02:27.451963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:39.923 [2024-12-05 10:02:27.451993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:39.923 [2024-12-05 10:02:27.452009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
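Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the NOTICE lines above spell out how to inspect them. Acting on those instructions would look like the following (commands quoted from the log's own NOTICE text; the assumption is that the spdk_trace binary from the same build tree is on PATH):

    # Snapshot the live trace for app instance 0 (-i 0 matches 'nvmf_tgt -i 0' above).
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0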
00:30:39.923 [2024-12-05 10:02:27.453087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:40.491 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:40.491 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:40.491 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:40.491 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:40.491 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:40.749 [2024-12-05 10:02:28.137900] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.749 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:41.009 null0
00:30:41.009 [2024-12-05 10:02:28.449109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:41.009 [2024-12-05 10:02:28.473488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
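By this point the target side of the error test is in place: crc32c is routed to the accel "error" module, common_target_config has created the null bdev (the "null0" line) and its subsystem, and the TCP listener is up on 10.0.0.3:4420. A hedged reconstruction of that configuration as explicit RPCs; the RPC names are standard SPDK ones, but the null-bdev geometry and subsystem flags are assumptions, not taken from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o crc32c -m error        # as in the rpc_cmd above
    $rpc framework_start_init                       # target was started --wait-for-rpc
    $rpc bdev_null_create null0 100 4096            # assumed size_mb / block_size
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420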
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104853
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104853 /var/tmp/bperf.sock
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 104853 ']'
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:41.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:41.009 10:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:41.009 [2024-12-05 10:02:28.597443] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:30:41.009 [2024-12-05 10:02:28.597626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104853 ]
00:30:41.268 [2024-12-05 10:02:28.787291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:41.527 [2024-12-05 10:02:28.945449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:42.094 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:42.094 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:42.094 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:42.094 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:42.353 10:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:42.612 nvme0n1
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
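The host-side sequence above first disables crc32c error injection so the attach path stays clean, connects nvme0 with --ddgst (data digest enabled) and unlimited bdev retries, then arms corruption for 256 crc32c operations. The same RPCs, exactly as logged, could be replayed by hand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # Each corrupted digest makes a READ complete with TRANSIENT TRANSPORT ERROR (00/22),
    # which is exactly the repeating pattern in the perform_tests output that follows.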
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:42.612 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:42.870 Running I/O for 2 seconds...
00:30:42.870 [2024-12-05 10:02:30.271033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.271033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.271033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.283662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.283662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.283662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.298078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.298078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.298078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.312002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.312002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.312002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.326703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.326703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.326703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.341133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.341133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.341133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.354941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.354941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.354941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.369390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.369390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.369390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.382978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.382978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.382978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.396835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.396835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.396835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.410812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.410812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.410812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.424606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.424606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.424606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.438323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.438323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.438323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.452099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.452099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.452099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.465846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.465846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.465846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.479947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.479947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.479947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:42.870 [2024-12-05 10:02:30.493781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:42.870 [2024-12-05 10:02:30.493781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:42.870 [2024-12-05 10:02:30.493781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.508242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.508242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.508242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.522492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.522492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.522492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.535324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.535324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.535381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.549974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.549974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.549974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.564146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.564146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.564146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.578266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.578344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.578362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.592385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.592385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.592385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.606167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.606167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.606167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.620549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.620549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.620549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.634390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.634390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.634390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.648213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.648213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.648213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.662804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.662804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.662804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.676468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.676468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.676468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.689240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.689240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.689240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.704947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.704947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.704947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.718868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.718868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.718868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.732893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.732943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.732962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.134 [2024-12-05 10:02:30.745776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.134 [2024-12-05 10:02:30.745776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.134 [2024-12-05 10:02:30.745776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.761839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.761839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.761839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.777530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.777530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.777530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.792484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.792484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.792484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.806516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.806516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.806516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.820559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.820559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.820559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.832489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.832529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.832529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.846980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.846980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.846980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.861197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.861197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.861197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.875018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.875018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.875018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.889559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.889559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.889559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.904021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.904021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.904021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.920025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.920025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.920025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.935226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.935226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.935226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.949429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.949429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.949429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.963224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.963224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.963224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.977686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.977686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.977686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:30.992771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:30.992771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:30.992771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:31.006634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:31.006634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:31.006634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.414 [2024-12-05 10:02:31.022770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.414 [2024-12-05 10:02:31.022770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.414 [2024-12-05 10:02:31.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.037583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.037583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.037583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.052727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.052727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.052727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.065956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.065956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.065956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.080515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.080515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.080515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.094997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.094997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.094997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.109031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.109031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.109031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.123416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.123416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.123472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.138035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.138035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.138035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.152826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.152826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.168060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.168060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.168060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.182458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.182458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.182458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.197219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.197219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.197219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.209233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.209233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.689 [2024-12-05 10:02:31.209233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:43.689 [2024-12-05 10:02:31.224297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:43.689 [2024-12-05 10:02:31.224338] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.224354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 [2024-12-05 10:02:31.238768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.238822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.238839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 17712.00 IOPS, 69.19 MiB/s [2024-12-05T10:02:31.318Z] [2024-12-05 10:02:31.254398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.254442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.254458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 [2024-12-05 10:02:31.268034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.268086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.268102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 [2024-12-05 10:02:31.283758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.283799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.283815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 [2024-12-05 10:02:31.298309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.298349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.298366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.689 [2024-12-05 10:02:31.313170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.689 [2024-12-05 10:02:31.313223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.689 [2024-12-05 10:02:31.313239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.947 [2024-12-05 10:02:31.328630] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.947 [2024-12-05 10:02:31.328670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.947 [2024-12-05 10:02:31.328686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.947 [2024-12-05 10:02:31.343531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.947 [2024-12-05 10:02:31.343572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.947 [2024-12-05 10:02:31.343588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.947 [2024-12-05 10:02:31.357202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.947 [2024-12-05 10:02:31.357242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.947 [2024-12-05 10:02:31.357259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.947 [2024-12-05 10:02:31.370956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.947 [2024-12-05 10:02:31.370995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.947 [2024-12-05 10:02:31.371011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.947 [2024-12-05 10:02:31.385098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.947 [2024-12-05 10:02:31.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.385155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.399156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.399196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.399212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.411494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.411534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.411550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.425399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.425439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.425455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.439420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.439459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.439475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.453209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.453249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.453265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.466904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.466954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.466971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.480592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.480631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.480648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.494360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.494400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.494416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.508248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.508288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.508304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.522427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.522468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.522485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.536153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.536192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.536208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.552172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.552212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.552228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.948 [2024-12-05 10:02:31.565980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:43.948 [2024-12-05 10:02:31.566019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.948 [2024-12-05 10:02:31.566035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.579969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.580019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.580036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.593855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.593895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.593911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.607734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.607774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10636 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.607790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.621440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.621480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.621496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.635237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.635277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.635294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.648947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.648988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.649004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.662772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.662823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.662839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.676530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.676570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.676587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.690772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.690812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.690828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.703917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.703966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.703983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.718788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.718827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.718843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.730988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.731027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.731043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.746160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.746201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.760632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.760672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.760688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.774661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.774701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.774718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.788473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.788513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.788530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.802079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.802119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.802135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.816893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.816943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.816962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.207 [2024-12-05 10:02:31.831575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.207 [2024-12-05 10:02:31.831629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.207 [2024-12-05 10:02:31.831662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.846257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.846321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.846355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.861954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.861992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.875729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.875769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.875786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.887704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.887751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.887768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 
[2024-12-05 10:02:31.902445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.902492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.902511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.916386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.916430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.916456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.930796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.930842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.930860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.946686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.946733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.946751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.959604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.959769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.959790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.974719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.974872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.974895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:31.989363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:31.989408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:31.989425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.003719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.003765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.003783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.017648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.017694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.017712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.031578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.031622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.031640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.045676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.045721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.045738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.059487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.059531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.059548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.074066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.074220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 10:02:32.074242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.465 [2024-12-05 10:02:32.087509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.465 [2024-12-05 10:02:32.087554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.465 [2024-12-05 
10:02:32.087573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.722 [2024-12-05 10:02:32.103420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.103466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.103483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.117289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.117439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.117461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.131494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.131539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.145951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.145994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.146011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.159708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.159754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.159771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.173432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.173478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.173496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.187208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.187253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:12521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.187270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.200921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.200977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.200994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.214874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.214919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.214952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.228580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.228763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.228786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 [2024-12-05 10:02:32.243238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:44.723 [2024-12-05 10:02:32.243408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.723 [2024-12-05 10:02:32.243431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.723 17874.50 IOPS, 69.82 MiB/s 00:30:44.723 Latency(us) 00:30:44.723 [2024-12-05T10:02:32.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.723 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:44.723 nvme0n1 : 2.01 17870.64 69.81 0.00 0.00 7149.95 4051.32 18588.39 00:30:44.723 [2024-12-05T10:02:32.352Z] =================================================================================================================== 00:30:44.723 [2024-12-05T10:02:32.352Z] Total : 17870.64 69.81 0.00 0.00 7149.95 4051.32 18588.39 00:30:44.723 { 00:30:44.723 "results": [ 00:30:44.723 { 00:30:44.723 "job": "nvme0n1", 00:30:44.723 "core_mask": "0x2", 00:30:44.723 "workload": "randread", 00:30:44.723 "status": "finished", 00:30:44.723 "queue_depth": 128, 00:30:44.723 "io_size": 4096, 00:30:44.723 "runtime": 2.007595, 00:30:44.723 "iops": 17870.63625880718, 00:30:44.723 "mibps": 69.80717288596554, 00:30:44.723 "io_failed": 0, 00:30:44.723 "io_timeout": 0, 00:30:44.723 "avg_latency_us": 7149.953487749812, 00:30:44.723 "min_latency_us": 4051.316363636364, 00:30:44.723 "max_latency_us": 18588.392727272727 00:30:44.723 } 00:30:44.723 ], 00:30:44.723 
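The summary row is internally consistent. Assuming the 4096-byte read size from the job line above, a quick awk check (added here as a cross-check, not log output) reproduces the MiB/s column and the approximate read count behind the JSON that follows:

  awk 'BEGIN { printf "%.2f MiB/s\n", 17870.64 * 4096 / 1048576 }'   # prints 69.81, matching the table
  awk 'BEGIN { printf "%.0f reads\n", 17870.64 * 2.01 }'             # ~35920 reads in the 2.01 s window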
"core_count": 1 00:30:44.723 } 00:30:44.723 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:44.723 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:44.723 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:44.723 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:44.723 | .driver_specific 00:30:44.723 | .nvme_error 00:30:44.723 | .status_code 00:30:44.723 | .command_transient_transport_error' 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104853 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 104853 ']' 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 104853 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104853 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:44.981 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104853' 00:30:44.981 killing process with pid 104853 00:30:44.981 Received shutdown signal, test time was about 2.000000 seconds 00:30:44.981 00:30:44.981 Latency(us) 00:30:44.981 [2024-12-05T10:02:32.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.981 [2024-12-05T10:02:32.610Z] =================================================================================================================== 00:30:44.981 [2024-12-05T10:02:32.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.982 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 104853 00:30:44.982 10:02:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 104853 00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:45.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104946
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104946 /var/tmp/bperf.sock
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 104946 ']'
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:45.916 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:45.916 [2024-12-05 10:02:33.443887] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:30:45.916 [2024-12-05 10:02:33.444408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104946 ]
00:30:45.916 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:45.916 Zero copy mechanism will not be used.
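run_bperf_err here respawns bdevperf for the next pattern: 128 KiB random reads at queue depth 16. The command line below is copied from the trace above; the polling loop only sketches what waitforlisten does, since its real implementation in autotest_common.sh is not shown in this log:

  #!/usr/bin/env bash
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  sock=/var/tmp/bperf.sock
  # -m 2: core mask 0x2, so the reactor runs on core 1 (see the notice below);
  # -z: start idle and wait for a perform_tests RPC instead of running at once;
  # -w/-o/-q/-t: randread, 131072-byte I/O, queue depth 16, 2-second run.
  "$bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Assumed wait loop: poll up to max_retries=100 times for the RPC socket.
  for ((i = 0; i < 100; i++)); do
      [[ -S $sock ]] && break
      sleep 0.1
  done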
00:30:46.174 [2024-12-05 10:02:33.624767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:46.174 [2024-12-05 10:02:33.737113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:46.741 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:46.741 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:46.741 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:46.741 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:47.000 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:47.000 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.001 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:47.001 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.001 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:47.001 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:47.265 nvme0n1
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:47.265 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:47.526 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:47.526 Zero copy mechanism will not be used.
00:30:47.526 Running I/O for 2 seconds...
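With the new bdevperf up, the harness re-arms the digest-error scenario before starting I/O. Every RPC below is copied verbatim from the trace above; only the inline comments are interpretation:

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Track NVMe errors per status code, and retry failed I/O indefinitely (-1)
  # so injected digest errors are counted without failing the job (io_failed: 0
  # in the earlier results despite 140 transient errors).
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave crc32c intact while the controller is attached ...
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
  # ... connect over TCP with data digest enabled (--ddgst) ...
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ... then corrupt crc32c results (-t corrupt -i 32, as traced above), so
  # received data periodically fails digest verification.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued workload in the idle (-z) bdevperf.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

The error records that follow show len:32 rather than len:1: with the namespace's 4096-byte blocks (assumed from the earlier job line), 32 blocks is exactly the 131072-byte I/O size passed via -o.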
00:30:47.526 [2024-12-05 10:02:34.947917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:47.526 [2024-12-05 10:02:34.947993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.526 [2024-12-05 10:02:34.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[26 further records of the same three-line pattern omitted, 10:02:34.952 through 10:02:35.064: injected data digest errors on tqpair=(0x61500002b280), each 32-block READ (qid:1 nsid:1; cid, lba and sqhd vary) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
00:30:47.526 [2024-12-05 10:02:35.068621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:47.526 [2024-12-05 10:02:35.068676]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.068692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.072734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.072788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.072805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.077179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.077233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.077250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.082108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.082163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.082195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.085369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.085423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.085439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.090303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.090345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.090361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.094488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.094531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.094546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.098547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.098601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.098616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.102917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.102982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.102999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.107620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.107692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.111265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.111318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.111335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.115585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.115641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.115657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.120653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.120707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.120723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.124481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.124537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.124554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.129414] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.129470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.129487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.134329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.134370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.134387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.137735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.137789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.137805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.143197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.143251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.143268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.148197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.148253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.148270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.527 [2024-12-05 10:02:35.151948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.527 [2024-12-05 10:02:35.152004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.527 [2024-12-05 10:02:35.152021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.156980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.157034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.157050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.162367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.162425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.162442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.166080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.166132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.166148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.170574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.170646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.170678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.175723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.175778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.175794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.179639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.179693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.179709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.183913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.183976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.183994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.188970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.189040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.194177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.194231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.194247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.197822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.197877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.197892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.202481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.202536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.202553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.207734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.207789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.207805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.212462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.212517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.216138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.216193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.216209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.220602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.220656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.220672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.224476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.224531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.224548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.228660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.228714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.228730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.232743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.232799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.232815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.237392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.237445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.237462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.241217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.788 [2024-12-05 10:02:35.241271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.788 [2024-12-05 10:02:35.241286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.788 [2024-12-05 10:02:35.245497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.245552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.245568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.250035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.250089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.250105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.253810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.253865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.253881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.257828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.257882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.257899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.261940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.261993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.262009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.265883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.265948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.265966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.270189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.270255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.270271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.274177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.274232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.278761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:30:47.789 [2024-12-05 10:02:35.278816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.278831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.282544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.282598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.282615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.286459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.286501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.286516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.290797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.290854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.294789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.294844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.294860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.298747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.298802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.298818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.303121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.303174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.303192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.307449] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.307491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.307513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.311511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.311553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.311573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.315585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.315628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.315650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.319955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.320016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.320033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.324047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.324101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.324117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.328010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.328064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.328080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.331740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.331782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.331804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.336166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.336208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.336225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.339704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.339745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.339769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.344576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.344619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.344642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.348946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.348986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.349009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.352803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.789 [2024-12-05 10:02:35.352844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.789 [2024-12-05 10:02:35.352870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.789 [2024-12-05 10:02:35.356883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.356925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.356968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.361502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.361544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.361567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.365388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.365430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.365455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.369962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.370002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.370025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.374283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.374350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.378743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.378786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.378808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.382576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.382650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.382666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.386732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.386773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.386795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.391416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.391458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.391484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.395274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.395315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.395331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.399709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.399751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.399775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.404957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.405010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.405041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.408615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.790 [2024-12-05 10:02:35.413471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:47.790 [2024-12-05 10:02:35.413531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.790 [2024-12-05 10:02:35.413550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.417231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.417273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.417298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.422372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.422430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.422448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.426763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.426827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.430732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.430774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.430798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.434776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.434817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.434833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.439559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.439602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.439617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.443024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.443063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.443088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.447615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.447657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.447678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.451652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.451694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.451716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.455944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.455984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.456000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.460235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.460279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.460307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.464681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.464724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.464758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.468537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.468579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.468594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.473009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.473050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.473073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.476834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.476876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.476898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.481257] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.481299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.481323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.485405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.485447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.485471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.489379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.489422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.489446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.493450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.493491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.493514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.497218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.497259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.497282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.501747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.501788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.050 [2024-12-05 10:02:35.501804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.050 [2024-12-05 10:02:35.505789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.050 [2024-12-05 10:02:35.505831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.505854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.509998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.510038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.510060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.514370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.514411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.514427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.518397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.518439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.518455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.522239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.522292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.522315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.526802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.526844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.526865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.531284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.531325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.531344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.535218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.535269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.535285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.539630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.539672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.539692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.543816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.543857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.543873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.547704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.547746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.547765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.551874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.551915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.551948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.555979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.556021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.556041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.559831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.559873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.559894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.564802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.564843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.564864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.568151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.568192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.568215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.572971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.573012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.573033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.577607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.577649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.577664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.581818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.581860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.581883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.585715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.585757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.585773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.589574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.589616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.589635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.593913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.593966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.593987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.597813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.597855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.597872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.602304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.602362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.602378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.606368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.606415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.606433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.610483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.610555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.614750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.051 [2024-12-05 10:02:35.614793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.051 [2024-12-05 10:02:35.614814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.051 [2024-12-05 10:02:35.618563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.618631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.623505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.623547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.623567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.627137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.627178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.627209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.631639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.631680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.631703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.636766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.636808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.636828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.641816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.641858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.641881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.645439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.645480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.645500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.650119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.650177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.654194] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.654235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.654255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.658145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.658186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.658207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.661776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.661839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.666493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.666537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.666554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.670389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.670475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.052 [2024-12-05 10:02:35.674849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.052 [2024-12-05 10:02:35.674892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.052 [2024-12-05 10:02:35.674913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.679229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.679271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.679293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.683980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.684031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.684058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.687672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.687713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.687735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.692269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.692310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.692330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.697853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.697895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.697916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.702346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.702400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.702416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.706119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.706160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.706176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.711552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.711593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.711614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.716546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.716587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.716607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.720603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.720644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.720664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.725408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.725461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.725480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.729336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.729376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.729392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.733504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.733557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.733576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.737842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.737896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.737915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.742643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.742699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.742735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.746693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.312 [2024-12-05 10:02:35.746762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.312 [2024-12-05 10:02:35.746779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.312 [2024-12-05 10:02:35.750975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.751042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.751061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.754553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.754596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.754613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.759192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.759234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.759255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.763549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.763590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.763610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.767101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.767142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.767162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.771689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.771730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.771753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.777153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.777208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.777224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.780871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.780913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.785319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.785361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.785377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.790263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.790312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.790345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.794697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.794738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.794754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.798493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.798535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.798552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.802749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.802792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.802815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.807455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.807497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.807516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.810816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.810857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.810879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.815699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.815770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.820626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.820668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.820691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.824381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.824422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.824438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.828268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.828309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.828333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.832790] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.832831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.832852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.837581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.837622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.837643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.841209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.841251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.841273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.845190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.845231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.845251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.849276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.849316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.849338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.853416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.853457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.853472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.857325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.857367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.857387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.861476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.313 [2024-12-05 10:02:35.861517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.313 [2024-12-05 10:02:35.861537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.313 [2024-12-05 10:02:35.865581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.865635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.865654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.869560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.869601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.869617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.873778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.873819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.873835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.877398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.877440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.877465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.881344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.881384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.881400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.885491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.885532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.885554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.889650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.889692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.889708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.893605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.893647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.893663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.898266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.898316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.898333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.902310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.902352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.902368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.906305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.906361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.910507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.910549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.910566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.914254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.914316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.914333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.918489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.918532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.918550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.922663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.922703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.922722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.926341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.926383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.926399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.930073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.930113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.930134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.934081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.934121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.934142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.314 [2024-12-05 10:02:35.938380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.314 [2024-12-05 10:02:35.938434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.314 [2024-12-05 10:02:35.938450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.575 7171.00 IOPS, 896.38 MiB/s [2024-12-05T10:02:36.204Z] [2024-12-05 10:02:35.944919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:48.575 [2024-12-05 
10:02:35.944987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.575 [2024-12-05 10:02:35.945005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.575 [2024-12-05 10:02:35.948797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:48.575 [2024-12-05 10:02:35.948838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.575 [2024-12-05 10:02:35.948859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated log records elided: the same three-line sequence recurs from 10:02:35.953 through 10:02:36.557 (elapsed 00:30:48.575-00:30:49.101) — a data digest error on tqpair=(0x61500002b280) reported by nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, followed by the affected READ command (sqid:1, nsid:1, len:32, varying cid and lba, SGL TRANSPORT DATA BLOCK) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c ...]
00:30:49.101 [2024-12-05 10:02:36.562018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:49.101 [2024-12-05 10:02:36.562071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.101 [2024-12-05 10:02:36.562087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.565590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.565644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.565664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.569855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.569895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.569911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.573967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.574019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.574035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.577811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.577865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.577885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.583016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.583056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.583072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.586986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.587040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.587056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.591148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.591189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.591205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.595048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.595089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.595105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.599541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.599581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.599597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.603757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.603810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.603829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.608419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.608461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.101 [2024-12-05 10:02:36.608477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.101 [2024-12-05 10:02:36.612532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.101 [2024-12-05 10:02:36.612580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.612596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.616920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.616984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.617004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.621139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.621181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.621197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.625394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.625436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.625452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.630226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.630267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.630282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.634236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.634290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.638292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.638352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.638371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.642605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.642657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.642672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.647648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.647690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.647711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.651257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.651298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.651315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.655754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.655796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.655812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.659507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.659547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.659569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.664058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.664098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.667971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.668011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.668032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.672002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.672042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.672063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.676435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.676476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.676496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.680450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.680491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.680507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.684309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.684351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.684373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.688229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.688270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.688292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.692154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.692195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.692220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.696188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.696229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.696250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.700046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.700088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.700109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.704349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.704391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.704411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.708451] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.708493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.708511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.712428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.712469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.712489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.716463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.716505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.716527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.720403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.720445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.102 [2024-12-05 10:02:36.724533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.102 [2024-12-05 10:02:36.724575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.102 [2024-12-05 10:02:36.724621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.728992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.729043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.729062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.733994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.734062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.734080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.737487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.737527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.737548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.741367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.741407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.741430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.745731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.745773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.745794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.749644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.749685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.749706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.753573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.753614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.362 [2024-12-05 10:02:36.753636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.362 [2024-12-05 10:02:36.757473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.362 [2024-12-05 10:02:36.757513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.757533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.762361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.762417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.762435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.766082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.766122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.766144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.770590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.770632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.770648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.774457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.774499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.774515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.779057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.779097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.779117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.783149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.783190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.783211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.786995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.787036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.787057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.790901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.790967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.790984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.794999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.795039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.795061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.799322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.799363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.799387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.803377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.803419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.803443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.807285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.807326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.807348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.810942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.810981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.811002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.815136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.815177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.815199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.819495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.819537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.819558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.824329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.824370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.824391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.827725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.827765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.827785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.832024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.832064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.832087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.835960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.835999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.836021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.840349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.840390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.840412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.845358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.845399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.845422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.848991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.849030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.849046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.853351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.853393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.853417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.858330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.858371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.858387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.861727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.861767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.861789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.866940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.866980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.867001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.871490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.871531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.871552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.874741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.874781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.874803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.879826] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.879868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.879891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.884657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.884698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.884714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.888153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.888207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.888226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.893284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.893326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.893354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.897680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.897734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.897753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.901803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.901856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.901875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.906113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.363 [2024-12-05 10:02:36.906177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.363 [2024-12-05 10:02:36.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.363 [2024-12-05 10:02:36.910629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.910672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.910692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.914715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.914772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.919092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.919133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.919149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.922828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.922868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.922890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.926918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.926970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.926990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.930885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.930938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.930956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.364 [2024-12-05 10:02:36.935027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:30:49.364 [2024-12-05 10:02:36.935068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.364 [2024-12-05 10:02:36.935084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.364 7207.50 IOPS, 900.94 MiB/s [2024-12-05T10:02:36.993Z] [2024-12-05 10:02:36.941612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:30:49.364 [2024-12-05 10:02:36.941647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.364 [2024-12-05 10:02:36.941663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.364
00:30:49.364 Latency(us)
00:30:49.364 [2024-12-05T10:02:36.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:49.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:49.364 nvme0n1 : 2.00 7206.33 900.79 0.00 0.00 2215.95 815.48 11021.96
00:30:49.364 [2024-12-05T10:02:36.993Z] ===================================================================================================================
00:30:49.364 [2024-12-05T10:02:36.993Z] Total : 7206.33 900.79 0.00 0.00 2215.95 815.48 11021.96
00:30:49.364 {
00:30:49.364   "results": [
00:30:49.364     {
00:30:49.364       "job": "nvme0n1",
00:30:49.364       "core_mask": "0x2",
00:30:49.364       "workload": "randread",
00:30:49.364       "status": "finished",
00:30:49.364       "queue_depth": 16,
00:30:49.364       "io_size": 131072,
00:30:49.364       "runtime": 2.002544,
00:30:49.364       "iops": 7206.333543732372,
00:30:49.364       "mibps": 900.7916929665465,
00:30:49.364       "io_failed": 0,
00:30:49.364       "io_timeout": 0,
00:30:49.364       "avg_latency_us": 2215.95248814106,
00:30:49.364       "min_latency_us": 815.4763636363637,
00:30:49.364       "max_latency_us": 11021.963636363636
00:30:49.364     }
00:30:49.364   ],
00:30:49.364   "core_count": 1
00:30:49.364 }
00:30:49.364 10:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:49.364 10:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:49.364 10:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:49.364 10:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:49.364 | .driver_specific
00:30:49.364 | .nvme_error
00:30:49.364 | .status_code
00:30:49.364 | .command_transient_transport_error'
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 ))
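For reference, the get_transient_errcount check traced above reduces to the following shell sketch. The rpc.py path, socket, and bdev name are the ones used by this run; the jq path is the filter shown in the trace, collapsed into a single expression.

    # Query the per-status-code NVMe error counters (populated because the
    # controller was attached with --nvme-error-stat) and require that at
    # least one TRANSIENT TRANSPORT ERROR was observed.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # 466 on this run; any non-zero count passes
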
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104946
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 104946 ']'
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 104946
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104946
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:49.622 killing process with pid 104946
10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104946'
00:30:49.622 Received shutdown signal, test time was about 2.000000 seconds
00:30:49.622
00:30:49.622 Latency(us)
00:30:49.622 [2024-12-05T10:02:37.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:49.622 [2024-12-05T10:02:37.251Z] ===================================================================================================================
00:30:49.622 [2024-12-05T10:02:37.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 104946
00:30:49.622 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 104946
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105043
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105043 /var/tmp/bperf.sock
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 105043 ']'
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:50.559 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
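The bdevperf launch traced above amounts to the sketch below. The binary path and flags are copied from the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation, and rpc_get_methods is a standard SPDK RPC used here only as a liveness probe.

    # Start bdevperf in wait-for-RPC mode (-z) on a private socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Poll until the UNIX-domain RPC socket answers, then the test can
    # issue its setup RPCs against it.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
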
00:30:50.818 [2024-12-05 10:02:38.263625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105043 ] 00:30:50.818 [2024-12-05 10:02:38.429790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.076 [2024-12-05 10:02:38.542475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.643 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.643 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:51.643 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:51.643 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:51.903 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.471 nvme0n1 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:52.471 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:52.471 Running I/O for 2 seconds... 
00:30:52.471 [2024-12-05 10:02:39.991719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:30:52.471 [2024-12-05 10:02:39.993180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:39.993254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.007479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:30:52.471 [2024-12-05 10:02:40.009653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.009697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.016692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:30:52.471 [2024-12-05 10:02:40.017893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.017966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.033156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:30:52.471 [2024-12-05 10:02:40.034847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.034898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.045308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:30:52.471 [2024-12-05 10:02:40.046689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.046729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.057058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:30:52.471 [2024-12-05 10:02:40.058166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.058205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.069725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:30:52.471 [2024-12-05 10:02:40.071192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.071245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.084069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:30:52.471 [2024-12-05 10:02:40.086139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.086191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:52.471 [2024-12-05 10:02:40.092605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:30:52.471 [2024-12-05 10:02:40.093747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.471 [2024-12-05 10:02:40.093785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.109136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:30:52.731 [2024-12-05 10:02:40.111458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.111508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.120885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:30:52.731 [2024-12-05 10:02:40.122166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.122215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.136850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:30:52.731 [2024-12-05 10:02:40.138670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.138709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.147490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:30:52.731 [2024-12-05 10:02:40.148688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.148727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.159086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:30:52.731 [2024-12-05 10:02:40.160259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.160310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.171659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:30:52.731 [2024-12-05 10:02:40.173145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.173194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.182740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:30:52.731 [2024-12-05 10:02:40.183724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.183774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.194438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:30:52.731 [2024-12-05 10:02:40.195634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.208694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:30:52.731 [2024-12-05 10:02:40.210593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.210632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.219343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:30:52.731 [2024-12-05 10:02:40.220585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.220624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.230893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:30:52.731 [2024-12-05 10:02:40.232116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.731 [2024-12-05 10:02:40.232166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.243589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:30:52.731 [2024-12-05 10:02:40.245196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:52.731 [2024-12-05 10:02:40.245245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:52.731 [2024-12-05 10:02:40.254849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:30:52.731 [2024-12-05 10:02:40.255841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.255880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.266592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:30:52.732 [2024-12-05 10:02:40.267850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.267897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.282478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:30:52.732 [2024-12-05 10:02:40.284606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.284656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.292708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:30:52.732 [2024-12-05 10:02:40.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.310236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:30:52.732 [2024-12-05 10:02:40.312335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.312390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.319362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:30:52.732 [2024-12-05 10:02:40.320358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.320410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.333880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:30:52.732 [2024-12-05 10:02:40.335256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:8401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.335307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:52.732 [2024-12-05 10:02:40.346214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:30:52.732 [2024-12-05 10:02:40.347560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.732 [2024-12-05 10:02:40.347611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.359693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:30:52.992 [2024-12-05 10:02:40.361404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.361454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.371487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:30:52.992 [2024-12-05 10:02:40.372609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.372656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.383494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:30:52.992 [2024-12-05 10:02:40.384521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.384572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.395372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:30:52.992 [2024-12-05 10:02:40.396434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.396484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.408608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:30:52.992 [2024-12-05 10:02:40.409978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.410026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.421474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:30:52.992 [2024-12-05 10:02:40.422638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.422701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.433638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:30:52.992 [2024-12-05 10:02:40.435069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.435108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.446553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:30:52.992 [2024-12-05 10:02:40.447682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.447732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.458574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:30:52.992 [2024-12-05 10:02:40.459696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.470739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:30:52.992 [2024-12-05 10:02:40.471767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.471818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.483721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:30:52.992 [2024-12-05 10:02:40.485107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.485149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.496207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:30:52.992 [2024-12-05 10:02:40.497564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.507827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 
00:30:52.992 [2024-12-05 10:02:40.508869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.508921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.520870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:30:52.992 [2024-12-05 10:02:40.522315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.522363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.535945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:30:52.992 [2024-12-05 10:02:40.537977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.538027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.545746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:30:52.992 [2024-12-05 10:02:40.547191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.992 [2024-12-05 10:02:40.547232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:52.992 [2024-12-05 10:02:40.560685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:30:52.992 [2024-12-05 10:02:40.562820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.993 [2024-12-05 10:02:40.562871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:52.993 [2024-12-05 10:02:40.569608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:30:52.993 [2024-12-05 10:02:40.570692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.993 [2024-12-05 10:02:40.570743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:52.993 [2024-12-05 10:02:40.586237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:30:52.993 [2024-12-05 10:02:40.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.993 [2024-12-05 10:02:40.588528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:52.993 [2024-12-05 10:02:40.595407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:30:52.993 [2024-12-05 10:02:40.596508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.993 [2024-12-05 10:02:40.596556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:52.993 [2024-12-05 10:02:40.609332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:30:52.993 [2024-12-05 10:02:40.610702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.993 [2024-12-05 10:02:40.610741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.622794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:30:53.253 [2024-12-05 10:02:40.624686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.624726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.634572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:30:53.253 [2024-12-05 10:02:40.636300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.645918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de038 00:30:53.253 [2024-12-05 10:02:40.647519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.647557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.657502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:30:53.253 [2024-12-05 10:02:40.658947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.658986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.668868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:30:53.253 [2024-12-05 10:02:40.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.670172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 
10:02:40.680371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:30:53.253 [2024-12-05 10:02:40.681487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.681526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.691924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:30:53.253 [2024-12-05 10:02:40.692852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.692901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.706024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:30:53.253 [2024-12-05 10:02:40.707144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.707195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.717517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:30:53.253 [2024-12-05 10:02:40.718548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.718596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.729125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:30:53.253 [2024-12-05 10:02:40.729886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.729946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.742763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:30:53.253 [2024-12-05 10:02:40.744437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.744476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.754390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:30:53.253 [2024-12-05 10:02:40.755842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.755881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 
cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.765798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:30:53.253 [2024-12-05 10:02:40.767126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.767177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.777352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:30:53.253 [2024-12-05 10:02:40.778517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.778569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.788707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:30:53.253 [2024-12-05 10:02:40.789687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.789725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.803981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:30:53.253 [2024-12-05 10:02:40.805900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.805948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.815581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:30:53.253 [2024-12-05 10:02:40.817505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.827209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:30:53.253 [2024-12-05 10:02:40.828824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.828864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.838698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:30:53.253 [2024-12-05 10:02:40.840238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.850116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:30:53.253 [2024-12-05 10:02:40.851516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.851556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.861721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:30:53.253 [2024-12-05 10:02:40.862913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.253 [2024-12-05 10:02:40.862969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:53.253 [2024-12-05 10:02:40.873177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:30:53.254 [2024-12-05 10:02:40.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.254 [2024-12-05 10:02:40.874237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.884641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:30:53.513 [2024-12-05 10:02:40.885568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.885606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.898730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:30:53.513 [2024-12-05 10:02:40.899840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.899880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.910126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:30:53.513 [2024-12-05 10:02:40.910982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.911034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.921597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:30:53.513 [2024-12-05 10:02:40.922427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 
10:02:40.922478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.935472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:30:53.513 [2024-12-05 10:02:40.937014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.937072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.946810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:30:53.513 [2024-12-05 10:02:40.948182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.948232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.958330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:30:53.513 [2024-12-05 10:02:40.959564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.959603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.969711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:30:53.513 [2024-12-05 10:02:40.970788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.970839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:53.513 20403.00 IOPS, 79.70 MiB/s [2024-12-05T10:02:41.142Z] [2024-12-05 10:02:40.982584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:30:53.513 [2024-12-05 10:02:40.984013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.984051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:40.996957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:30:53.513 [2024-12-05 10:02:40.998980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:40.999025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.007366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:30:53.513 [2024-12-05 10:02:41.008516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.008555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.018796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:30:53.513 [2024-12-05 10:02:41.019674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.030372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:30:53.513 [2024-12-05 10:02:41.031201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.031252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.044052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:30:53.513 [2024-12-05 10:02:41.045661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.045698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.055499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:30:53.513 [2024-12-05 10:02:41.056884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.056922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.067998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:30:53.513 [2024-12-05 10:02:41.069729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.069768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.079197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:30:53.513 [2024-12-05 10:02:41.080507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.080545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.090916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:30:53.513 [2024-12-05 
10:02:41.092340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.092378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.105167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:30:53.513 [2024-12-05 10:02:41.107244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.107296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.115701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:30:53.513 [2024-12-05 10:02:41.117400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.117449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:53.513 [2024-12-05 10:02:41.129773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:30:53.513 [2024-12-05 10:02:41.131900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.513 [2024-12-05 10:02:41.131958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.144352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:30:53.773 [2024-12-05 10:02:41.146382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.146436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.155285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:30:53.773 [2024-12-05 10:02:41.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.156594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.171487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:30:53.773 [2024-12-05 10:02:41.173348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.173386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.184292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173fd640 00:30:53.773 [2024-12-05 10:02:41.186448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.194982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:30:53.773 [2024-12-05 10:02:41.196449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.196488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.206426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:30:53.773 [2024-12-05 10:02:41.207870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.207921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.219121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:30:53.773 [2024-12-05 10:02:41.220860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.220899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.230283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:30:53.773 [2024-12-05 10:02:41.231755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.231793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.242108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:30:53.773 [2024-12-05 10:02:41.243403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.243442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.253903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:30:53.773 [2024-12-05 10:02:41.255384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.773 [2024-12-05 10:02:41.255423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:53.773 [2024-12-05 10:02:41.266681] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0
00:30:53.773 [2024-12-05 10:02:41.268159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:53.773 [2024-12-05 10:02:41.268209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... dozens of further record pairs, 10:02:41.278 through 10:02:41.971, omitted here: each is a data_crc32_calc_done data digest error on tqpair=(0x618000004480) followed by the corrupted WRITE command print and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:30:54.555 20539.50 IOPS, 80.23 MiB/s
00:30:54.555 Latency(us)
00:30:54.555 [2024-12-05T10:02:42.184Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:54.555 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:54.555 nvme0n1             : 2.00    20552.61      80.28       0.00     0.00    6220.86    3008.70   17396.83
00:30:54.555 [2024-12-05T10:02:42.184Z] ===================================================================================================================
00:30:54.555 [2024-12-05T10:02:42.184Z] Total                       : 20552.61      80.28       0.00     0.00    6220.86    3008.70   17396.83
00:30:54.555 {
00:30:54.555   "results": [
00:30:54.555     {
00:30:54.555       "job": "nvme0n1",
00:30:54.555       "core_mask": "0x2",
00:30:54.555       "workload": "randwrite",
00:30:54.555       "status": "finished",
00:30:54.555       "queue_depth": 128,
00:30:54.555       "io_size": 4096,
00:30:54.555       "runtime": 2.004952,
00:30:54.555       "iops": 20552.61173334823,
00:30:54.555       "mibps": 80.28363958339152,
00:30:54.555       "io_failed": 0,
00:30:54.555       "io_timeout": 0,
00:30:54.555       "avg_latency_us": 6220.857618454058,
00:30:54.555       "min_latency_us": 3008.6981818181816,
00:30:54.555       "max_latency_us": 17396.82909090909
00:30:54.555     }
00:30:54.555   ],
00:30:54.555   "core_count": 1
00:30:54.555 }
00:30:54.555 10:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:54.555 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:54.555 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:54.555 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 ))
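The assertion just traced distills to a few lines of shell. The following is a paraphrased sketch reconstructed from the xtrace above, not the verbatim helper in host/digest.sh; the 161 is simply what bdev_get_iostat returned on this run:

# Sketch reconstructed from the trace above: read back the per-controller
# NVMe error counters that --nvme-error-stat enables, and pull out the
# transient-transport-error count.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# The test passes only if at least one injected digest error was counted;
# this run saw 161 of them.
(( $(get_transient_errcount nvme0n1) > 0 ))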
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105043
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 105043 ']'
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 105043
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105043
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:54.814 killing process with pid 105043
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105043'
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 105043
00:30:54.814 Received shutdown signal, test time was about 2.000000 seconds
00:30:54.814
00:30:54.814 Latency(us)
00:30:54.814 [2024-12-05T10:02:42.443Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:54.814 [2024-12-05T10:02:42.443Z] ===================================================================================================================
00:30:54.814 [2024-12-05T10:02:42.443Z] Total                       :       0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:30:54.814 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 105043
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105133
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105133 /var/tmp/bperf.sock
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 105133 ']'
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:55.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
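For readers reconstructing run_bperf_err from this trace, its startup amounts to the sketch below. This is a paraphrase under the flags shown above, not the script's verbatim body, and $rootdir is shorthand for /home/vagrant/spdk_repo/spdk:

# Paraphrased from the xtrace above. -z starts bdevperf idle: it opens the
# RPC socket and waits for configuration plus an explicit perform_tests
# call instead of generating I/O immediately.
rw=randwrite bs=131072 qd=16
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
bperfpid=$!
waitforlisten "$bperfpid" /var/tmp/bperf.sock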
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:55.752 10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:55.752 [2024-12-05 10:02:43.237322] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:30:55.752 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:55.752 Zero copy mechanism will not be used.
00:30:55.752 [2024-12-05 10:02:43.237514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105133 ]
00:30:56.011 [2024-12-05 10:02:43.418950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:56.011 [2024-12-05 10:02:43.528758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:56.581 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:56.581 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:56.581 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:56.581 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:56.840 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:57.409 nvme0n1
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
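Stripped of the xtrace noise, the controller setup above is three RPCs against the idle bdevperf app. A condensed restatement follows; the flags are exactly as traced, while the comments are interpretation rather than script text:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Keep NVMe error-status counters and retry failed I/O without limit, so a
# digest failure is recorded and retried instead of failing the whole job.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP target with data digest (--ddgst) enabled on its queue pairs.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm the accel error injector so CRC32C results come back corrupted;
# see SPDK's accel_error module for the exact semantics of -i 32.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32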
00:30:57.409 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:57.409 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:57.409 Zero copy mechanism will not be used.
00:30:57.409 Running I/O for 2 seconds...
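Each record triplet that follows is one injected failure: tcp.c reports the bad CRC32C data digest, nvme_qpair.c prints the offending 128 KiB WRITE (len:32 blocks at the 4096-byte io_size), and the completion carries status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, which is what --nvme-error-stat tallies. As a hedged cross-check, assuming this console output were captured to a hypothetical bperf.log, the printed completions should line up with the RPC counter read back afterwards:

# Hypothetical cross-check (bperf.log is an assumed capture of this output):
# the count of 00/22 completions printed should match the
# command_transient_transport_error counter read via bdev_get_iostat.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log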
00:30:57.409 [2024-12-05 10:02:44.947827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:30:57.409 [2024-12-05 10:02:44.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:57.409 [2024-12-05 10:02:44.948104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... further record triplets, 10:02:44.955 through 10:02:45.366, omitted here: the same data digest error on tqpair=(0x618000005080), the offending WRITE (always cid:0, len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeating roughly every 7 ms ...]
00:30:57.927 [2024-12-05 10:02:45.372773]
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.927 [2024-12-05 10:02:45.372907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.927 [2024-12-05 10:02:45.372957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.379549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.379715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.379743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.386405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.386537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.386564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.393146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.393329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.399982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.400133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.400162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.407026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.407173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.407201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.413777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.413954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.413981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.420714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.420887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.420914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.427460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.427595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.427623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.434285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.434442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.434468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.440867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.441066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.441094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.447640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.447820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.447847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.454378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.454515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.454542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.461147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.461277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.461304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.467925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.468077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.468105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.474765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.474973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.475001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.481592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.481767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.481794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.488513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.488715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.488741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.495349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.495487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.495514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.502099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.928 [2024-12-05 10:02:45.502281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.928 [2024-12-05 10:02:45.502316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.928 [2024-12-05 10:02:45.508817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.508980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 
10:02:45.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.515773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.515951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.515979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.522719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.522870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.522897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.529619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.529748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.529775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.536446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.536596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.536624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.543397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.543537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.543565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:57.929 [2024-12-05 10:02:45.550192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:57.929 [2024-12-05 10:02:45.550375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.929 [2024-12-05 10:02:45.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.557185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.557321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.557348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.564315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.564500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.564527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.571543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.571711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.571738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.579199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.579349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.579376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.586428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.586567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.586603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.594185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.594373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.594400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.602011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.602240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.609621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.609751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.609779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.617043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.617219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.624378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.624548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.624575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.631535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.631714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.631740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.638813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.638983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.639011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.645956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.646146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.646174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.653015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.653165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.653192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.660160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 
00:30:58.187 [2024-12-05 10:02:45.660334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.660361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.667261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.667472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.667499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.674296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.674490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.674518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.681332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.681479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.681505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.688375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.688545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.688572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.695380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.695547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.695575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.702719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.702889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.702916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.709906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.710059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.710086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.717046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.717210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.717237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.724121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.724281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.724308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.731338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.731506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.731533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.738449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.187 [2024-12-05 10:02:45.738658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.187 [2024-12-05 10:02:45.738685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.187 [2024-12-05 10:02:45.745606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.745777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.745805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.752793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.752979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.753006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 
10:02:45.759960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.760108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.760134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.767134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.767327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.767354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.774112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.774342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.774369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.781104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.781263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.781290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.788281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.788447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.788475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.795455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.795636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.795663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.802681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.802848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.802875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.188 [2024-12-05 10:02:45.809784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.188 [2024-12-05 10:02:45.809979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.188 [2024-12-05 10:02:45.810007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.817361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.817529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.824497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.824699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.824726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.831651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.831849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.838832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.839010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.839037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.846136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.846309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.846344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.853201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.853384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.853411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.860262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.860411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.860438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.867384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.867553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.867581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.874640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.874826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.874854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.881739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.881884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.881911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.888807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.888999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.889026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.895813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.895976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.896003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.903286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.903487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.903514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.910334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.910503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.917311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.917465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.917492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.924144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.924286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.924313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.930863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.931044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.931071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.937838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.938042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.938069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.944645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.944777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.944805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.447 4410.00 IOPS, 551.25 MiB/s [2024-12-05T10:02:46.076Z] [2024-12-05 10:02:45.951982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.952108] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.952134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.447 [2024-12-05 10:02:45.958840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.447 [2024-12-05 10:02:45.959008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.447 [2024-12-05 10:02:45.959036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:45.965962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.448 [2024-12-05 10:02:45.966133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.448 [2024-12-05 10:02:45.966159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:45.972856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.448 [2024-12-05 10:02:45.973038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.448 [2024-12-05 10:02:45.973065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:45.979753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.448 [2024-12-05 10:02:45.979946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.448 [2024-12-05 10:02:45.979973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:45.986492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.448 [2024-12-05 10:02:45.986686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.448 [2024-12-05 10:02:45.986714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:45.993346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:30:58.448 [2024-12-05 10:02:45.993483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.448 [2024-12-05 10:02:45.993509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:58.448 [2024-12-05 10:02:46.000143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8
00:30:58.448 [2024-12-05 10:02:46.000294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.448 [2024-12-05 10:02:46.000321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:58.448 [2024-12-05 10:02:46.006919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:30:58.448 [2024-12-05 10:02:46.007096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.448 [2024-12-05 10:02:46.007123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same data_crc32_calc_done / print_command / print_completion triplet repeats from 10:02:46.013758 through 10:02:46.941050 for every queued WRITE on tqpair 0x618000005080 (only the lba, sqhd, and timestamps vary); each command is failed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the iostat check below counts 288 such completions ...]
00:30:59.485 4445.50 IOPS, 555.69 MiB/s [2024-12-05T10:02:47.114Z]
[2024-12-05 10:02:46.948728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8
00:30:59.485 [2024-12-05 10:02:46.948949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.485 [2024-12-05 10:02:46.948987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.485 [2024-12-05 10:02:46.948949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.485 [2024-12-05 10:02:46.948987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.485
00:30:59.485 Latency(us)
00:30:59.485 [2024-12-05T10:02:47.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:59.485 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:59.485 nvme0n1 : 2.01 4443.28 555.41 0.00 0.00 3593.09 1921.40 8162.21
00:30:59.485 [2024-12-05T10:02:47.114Z] ===================================================================================================================
00:30:59.485 [2024-12-05T10:02:47.114Z] Total : 4443.28 555.41 0.00 0.00 3593.09 1921.40 8162.21
00:30:59.485 {
00:30:59.485 "results": [
00:30:59.485 {
00:30:59.485 "job": "nvme0n1",
00:30:59.485 "core_mask": "0x2",
00:30:59.485 "workload": "randwrite",
00:30:59.485 "status": "finished",
00:30:59.485 "queue_depth": 16,
00:30:59.485 "io_size": 131072,
00:30:59.485 "runtime": 2.0055,
00:30:59.485 "iops": 4443.280977312391,
00:30:59.485 "mibps": 555.4101221640489,
00:30:59.485 "io_failed": 0,
00:30:59.485 "io_timeout": 0,
00:30:59.485 "avg_latency_us": 3593.0872021301557,
00:30:59.485 "min_latency_us": 1921.3963636363637,
00:30:59.485 "max_latency_us": 8162.210909090909
00:30:59.485 }
00:30:59.485 ],
00:30:59.485 "core_count": 1
00:30:59.485 }
00:30:59.485 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:59.485 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:59.485 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:59.485 | .driver_specific
00:30:59.485 | .nvme_error
00:30:59.485 | .status_code
00:30:59.485 | .command_transient_transport_error'
00:30:59.485 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 288 > 0 ))
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105133
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 105133 ']'
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 105133
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105133
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:59.742 killing process with pid 105133
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105133'
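The xtrace above tallies the injected digest failures by reading the controller's NVMe error counters back out of the bperf process over its RPC socket. A rough standalone sketch of the same check, assuming the run's socket at /var/tmp/bperf.sock and SPDK's stock scripts/rpc.py (the errcount variable name is illustrative):

  # Fetch per-bdev iostat and pull out the count of completions that carried
  # COMMAND TRANSIENT TRANSPORT ERROR; this run reported 288 of them.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "data-digest failures surfaced as ${errcount} transient transport errors"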
10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 105133
00:30:59.742 Received shutdown signal, test time was about 2.000000 seconds
00:30:59.742
00:30:59.742 Latency(us)
00:30:59.742 [2024-12-05T10:02:47.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:59.742 [2024-12-05T10:02:47.371Z] ===================================================================================================================
00:30:59.742 [2024-12-05T10:02:47.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:59.742 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 105133
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 104809
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 104809 ']'
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 104809
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104809
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:00.677 killing process with pid 104809
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104809'
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 104809
00:31:00.677 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 104809
00:31:01.614
00:31:01.614 real 0m22.114s
00:31:01.614 user 0m40.734s
00:31:01.614 sys 0m5.666s
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:01.614 ************************************
00:31:01.614 END TEST nvmf_digest_error
00:31:01.614 ************************************
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:01.614 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:01.873 rmmod nvme_tcp
00:31:01.873 rmmod nvme_fabrics
00:31:01.873 rmmod nvme_keyring
00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- #
modprobe -v -r nvme-fabrics 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 104809 ']' 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 104809 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 104809 ']' 00:31:01.873 Process with pid 104809 is not found 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 104809 00:31:01.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (104809) - No such process 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 104809 is not found' 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:01.873 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.132 
10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
00:31:02.132
00:31:02.132 real 0m46.436s
00:31:02.132 user 1m23.752s
00:31:02.132 sys 0m11.785s
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:02.132 ************************************
00:31:02.132 END TEST nvmf_digest
00:31:02.132 ************************************
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:02.132 ************************************
00:31:02.132 START TEST nvmf_mdns_discovery
00:31:02.132 ************************************
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:31:02.132 * Looking for test storage...
00:31:02.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:31:02.132 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in
00:31:02.392
10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:02.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.392 --rc genhtml_branch_coverage=1 00:31:02.392 --rc genhtml_function_coverage=1 00:31:02.392 --rc genhtml_legend=1 00:31:02.392 --rc geninfo_all_blocks=1 00:31:02.392 --rc geninfo_unexecuted_blocks=1 00:31:02.392 00:31:02.392 ' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:02.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.392 --rc genhtml_branch_coverage=1 00:31:02.392 --rc genhtml_function_coverage=1 00:31:02.392 --rc genhtml_legend=1 00:31:02.392 --rc geninfo_all_blocks=1 00:31:02.392 --rc geninfo_unexecuted_blocks=1 00:31:02.392 00:31:02.392 ' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:02.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.392 --rc genhtml_branch_coverage=1 00:31:02.392 --rc genhtml_function_coverage=1 00:31:02.392 --rc genhtml_legend=1 00:31:02.392 --rc geninfo_all_blocks=1 00:31:02.392 --rc geninfo_unexecuted_blocks=1 00:31:02.392 00:31:02.392 ' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:02.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.392 --rc genhtml_branch_coverage=1 00:31:02.392 --rc genhtml_function_coverage=1 00:31:02.392 --rc genhtml_legend=1 00:31:02.392 --rc geninfo_all_blocks=1 00:31:02.392 --rc geninfo_unexecuted_blocks=1 00:31:02.392 00:31:02.392 ' 
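The cmp_versions trace above splits both version strings on '.' and '-' and walks them field by field, which is what decides here that lcov 1.15 predates 2.x and selects the older coverage flags. A minimal standalone sketch of the same idea (version_lt is a hypothetical helper name, not the scripts/common.sh function):

  version_lt() {
    local -a a b; local i
    IFS='.-' read -ra a <<< "$1"   # "1.15" -> (1 15)
    IFS='.-' read -ra b <<< "$2"   # "2"    -> (2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # first differing field decides
      (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x: keep the 1.x branch/function coverage flags"

Comparing numeric fields rather than whole strings avoids the classic trap where a plain lexicographic compare would sort 1.15 before 1.9.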
00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:02.392 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:02.392 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:02.393 Cannot find device "nvmf_init_br" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:02.393 Cannot find device "nvmf_init_br2" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:02.393 Cannot find device "nvmf_tgt_br" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:02.393 Cannot find device "nvmf_tgt_br2" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:02.393 Cannot find device "nvmf_init_br" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:02.393 Cannot find device "nvmf_init_br2" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:02.393 Cannot find device "nvmf_tgt_br" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:02.393 Cannot find device "nvmf_tgt_br2" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:02.393 Cannot find device "nvmf_br" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:02.393 Cannot find device "nvmf_init_if" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:02.393 Cannot find device "nvmf_init_if2" 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:31:02.393 10:02:49 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:02.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:02.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:02.393 10:02:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:02.393 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:02.393 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
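Stripped of the xtrace noise, what nvmf_veth_init has built at this point (and finishes wiring in the lines that follow) is two veth pairs joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch using the names and addresses from this trace; the second pair (nvmf_init_if2/nvmf_tgt_if2 carrying 10.0.0.2 and 10.0.0.4) is set up the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br   # enslaving both *_br peers puts the
  ip link set nvmf_tgt_br master nvmf_br    # initiator and target on one L2 segment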
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:31:02.652 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:31:02.652 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:31:02.652
00:31:02.652 --- 10.0.0.3 ping statistics ---
00:31:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:02.652 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:31:02.652 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:31:02.652 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:31:02.652
00:31:02.652 --- 10.0.0.4 ping statistics ---
00:31:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:02.652 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:02.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:02.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:31:02.652
00:31:02.652 --- 10.0.0.1 ping statistics ---
00:31:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:02.652 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:31:02.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:02.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms
00:31:02.652
00:31:02.652 --- 10.0.0.2 ping statistics ---
00:31:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:02.652 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=105493
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 105493
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 105493 ']'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:02.652 10:02:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.910 [2024-12-05 10:02:50.405129] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
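Because nvmfappstart passes --wait-for-rpc, the target comes up with its framework paused, which is what lets the harness set the discovery filter before subsystem initialization; a rough sketch of that startup sequence as it plays out in this run (waitforlisten is the harness helper that polls for the RPC socket):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # once /var/tmp/spdk.sock is accepting connections:
  scripts/rpc.py nvmf_set_config --discovery-filter=address   # must happen before init
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192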
00:31:02.910 [2024-12-05 10:02:50.405299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.168 [2024-12-05 10:02:50.599909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.168 [2024-12-05 10:02:50.760147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.168 [2024-12-05 10:02:50.760240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.168 [2024-12-05 10:02:50.760265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.168 [2024-12-05 10:02:50.760304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.169 [2024-12-05 10:02:50.760325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.169 [2024-12-05 10:02:50.761978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.104 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 [2024-12-05 10:02:51.770269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 [2024-12-05 10:02:51.778514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 null0 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 null1 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 null2 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 null3 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=105549 00:31:04.363 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 105549 /tmp/host.sock 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 105549 ']' 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # 
local rpc_addr=/tmp/host.sock 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.364 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.364 10:02:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.364 [2024-12-05 10:02:51.950619] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:31:04.364 [2024-12-05 10:02:51.950803] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105549 ] 00:31:04.622 [2024-12-05 10:02:52.138948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.881 [2024-12-05 10:02:52.282430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=105579 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:31:05.449 10:02:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:31:05.449 Process 1068 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:31:05.449 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:31:05.449 Successfully dropped root privileges. 00:31:05.449 avahi-daemon 0.8 starting up. 00:31:05.449 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:31:05.449 Successfully called chroot(). 00:31:05.449 Successfully dropped remaining capabilities. 00:31:06.386 No service file found in /etc/avahi/services. 00:31:06.386 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:31:06.386 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:31:06.386 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:31:06.386 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:31:06.386 Network interface enumeration completed. 00:31:06.386 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:31:06.386 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:31:06.386 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:31:06.386 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:31:06.386 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1481263550. 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:06.386 10:02:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:31:06.645 10:02:54 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.645 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:06.646 [2024-12-05 10:02:54.217712] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.646 [2024-12-05 10:02:54.263625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.646 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.905 10:02:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:31:07.904 [2024-12-05 10:02:55.117707] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:31:08.170 [2024-12-05 10:02:55.517731] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:31:08.170 [2024-12-05 10:02:55.517779] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:31:08.170 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:08.170 cookie is 0 00:31:08.170 is_local: 1 00:31:08.170 our_own: 0 00:31:08.170 wide_area: 0 00:31:08.170 multicast: 1 00:31:08.170 cached: 1 00:31:08.170 [2024-12-05 10:02:55.617722] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:31:08.170 [2024-12-05 10:02:55.617755] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:31:08.170 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:08.170 cookie is 0 00:31:08.170 is_local: 1 00:31:08.170 our_own: 0 00:31:08.170 wide_area: 0 00:31:08.170 multicast: 1 00:31:08.170 cached: 1 00:31:09.106 [2024-12-05 10:02:56.518866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.106 [2024-12-05 10:02:56.518983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.4, port=8009 00:31:09.106 [2024-12-05 10:02:56.519065] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.106 [2024-12-05 10:02:56.519099] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.106 [2024-12-05 10:02:56.519116] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:31:09.106 [2024-12-05 10:02:56.627294] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:31:09.106 [2024-12-05 10:02:56.627344] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:31:09.106 [2024-12-05 10:02:56.627378] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:09.106 [2024-12-05 10:02:56.713445] bdev_nvme.c:7440:discovery_log_page_cb: 
*INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:31:09.365 [2024-12-05 10:02:56.776104] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:31:09.365 [2024-12-05 10:02:56.777479] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:31:09.365 [2024-12-05 10:02:56.779778] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:31:09.365 [2024-12-05 10:02:56.779816] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:31:09.365 [2024-12-05 10:02:56.785790] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 00:31:09.933 [2024-12-05 10:02:57.518691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.933 [2024-12-05 10:02:57.518741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b780 with addr=10.0.0.4, port=8009 00:31:09.933 [2024-12-05 10:02:57.518790] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.933 [2024-12-05 10:02:57.518806] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.933 [2024-12-05 10:02:57.518818] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:31:11.310 [2024-12-05 10:02:58.518739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.310 [2024-12-05 10:02:58.518796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ba00 with addr=10.0.0.4, port=8009 00:31:11.310 [2024-12-05 10:02:58.518852] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:11.310 [2024-12-05 10:02:58.518869] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:11.310 [2024-12-05 10:02:58.518883] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:11.877 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:11.877 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:11.877 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:11.877 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.878 [2024-12-05 10:02:59.346094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:31:11.878 [2024-12-05 10:02:59.347935] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:11.878 [2024-12-05 10:02:59.348036] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.878 [2024-12-05 10:02:59.354042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:31:11.878 [2024-12-05 10:02:59.355002] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:11.878 10:02:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.878 10:02:59 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:31:11.878 [2024-12-05 10:02:59.488052] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:31:11.878 [2024-12-05 10:02:59.488108] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:12.135 [2024-12-05 10:02:59.527508] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:31:12.135 [2024-12-05 10:02:59.527542] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:31:12.135 [2024-12-05 10:02:59.527580] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:12.135 [2024-12-05 10:02:59.574705] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:31:12.135 [2024-12-05 10:02:59.613641] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:31:12.135 [2024-12-05 10:02:59.668252] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:31:12.135 [2024-12-05 10:02:59.669395] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x61500002c680:1 started. 00:31:12.135 [2024-12-05 10:02:59.671557] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:31:12.135 [2024-12-05 10:02:59.671593] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:31:12.135 [2024-12-05 10:02:59.676467] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
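The rpc_cmd calls traced at @123 through @156 are the test harness's wrapper around SPDK's scripts/rpc.py (the wrapper expansion is an assumption here; the method names, NQNs, addresses, and ports are copied from the trace). A hedged sketch of the second target's setup as plain RPC invocations:

  # Create the subsystem later attached as mdns0_nvme0, back it with null2,
  # and allow the test host NQN to connect.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
  # Data path on 4420, discovery service on 8009.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
  # Advertise the discovery controller over mDNS so avahi-browse sees an
  # _nvme-disc._tcp service for this target.
  scripts/rpc.py nvmf_publish_mdns_prr

Until the 8009 listener exists, the host's periodic connects to 10.0.0.4:8009 fail with errno 111, which is exactly the repeated posix_sock_create error above; the 'not found' check at @152 deliberately runs in that window.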
00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:31:13.069 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:13.069 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:31:13.069 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:13.069 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:13.069 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:13.069 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:13.069 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.069 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:13.070 10:03:00 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:13.070 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.330 [2024-12-05 10:03:00.717762] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:31:13.330 [2024-12-05 10:03:00.717805] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:31:13.330 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:13.330 cookie is 0 00:31:13.330 is_local: 1 00:31:13.330 our_own: 0 00:31:13.330 wide_area: 0 00:31:13.330 multicast: 1 00:31:13.330 cached: 1 00:31:13.330 [2024-12-05 10:03:00.717839] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.330 [2024-12-05 10:03:00.779656] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002c900:1 started. 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.330 10:03:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:31:13.330 [2024-12-05 10:03:00.784572] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x61500002cb80:1 started. 
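check_mdns_request_exists (@85 through @108, run once above expecting 'not found' and once expecting found) is a glob match over avahi-browse's parseable output. A hedged reconstruction from the traced comparisons; folding process, ip, and port into one combined test approximates the nested [[ ... == *\s\p\d\k\1* ]] checks in the trace:

  # avahi-browse -p prints one ';'-separated record per line; resolved
  # records start with '=':
  #   =;<iface>;<proto>;<name>;<type>;<domain>;<host>;<address>;<port>;<txt>
  check_mdns_request_exists() {
      local process=$1 ip=$2 port=$3 check_type=$4 line
      local -a lines
      readarray -t lines < <(avahi-browse -t -r _nvme-disc._tcp -p)
      for line in "${lines[@]}"; do
          if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
              [[ $check_type == found ]]   # matched: succeed only if expected
              return
          fi
      done
      [[ $check_type == 'not found' ]]     # no match: succeed only if expected absent
  }

Called as in the trace: check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' before the second discovery listener exists, then the same arguments with found once it does.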
00:31:13.330 [2024-12-05 10:03:00.786689] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002c900 was disconnected and freed. delete nvme_qpair. 00:31:13.330 [2024-12-05 10:03:00.796902] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x61500002cb80 was disconnected and freed. delete nvme_qpair. 00:31:13.589 [2024-12-05 10:03:01.017764] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:31:13.589 [2024-12-05 10:03:01.017814] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:31:13.589 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:13.589 cookie is 0 00:31:13.589 is_local: 1 00:31:13.589 our_own: 0 00:31:13.589 wide_area: 0 00:31:13.589 multicast: 1 00:31:13.589 cached: 1 00:31:13.589 [2024-12-05 10:03:01.017836] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 [2024-12-05 10:03:01.908186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:14.526 [2024-12-05 10:03:01.908893] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:14.526 [2024-12-05 10:03:01.908990] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:14.526 [2024-12-05 10:03:01.909079] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:31:14.526 [2024-12-05 10:03:01.909126] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 [2024-12-05 10:03:01.916203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:31:14.526 [2024-12-05 10:03:01.916991] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:14.526 [2024-12-05 10:03:01.917198] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.526 10:03:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:31:14.526 [2024-12-05 10:03:02.049060] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:31:14.526 [2024-12-05 10:03:02.049752] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:31:14.526 [2024-12-05 10:03:02.111792] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:31:14.526 [2024-12-05 10:03:02.111918] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:31:14.526 
[2024-12-05 10:03:02.111940] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:31:14.526 [2024-12-05 10:03:02.111967] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:31:14.526 [2024-12-05 10:03:02.112032] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:14.526 [2024-12-05 10:03:02.112298] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:31:14.527 [2024-12-05 10:03:02.112368] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:31:14.527 [2024-12-05 10:03:02.112388] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:31:14.527 [2024-12-05 10:03:02.112398] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:31:14.527 [2024-12-05 10:03:02.112426] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:14.785 [2024-12-05 10:03:02.157592] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:31:14.785 [2024-12-05 10:03:02.157624] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:31:14.785 [2024-12-05 10:03:02.157708] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:31:14.785 [2024-12-05 10:03:02.157723] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:15.612 10:03:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:15.612 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 [2024-12-05 10:03:03.229921] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:15.613 [2024-12-05 10:03:03.230024] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:15.613 [2024-12-05 10:03:03.230090] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:31:15.613 [2024-12-05 10:03:03.230162] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.613 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.613 [2024-12-05 10:03:03.239654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.613 [2024-12-05 10:03:03.239705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.613 [2024-12-05 10:03:03.239727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.613 [2024-12-05 10:03:03.239742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.613 [2024-12-05 10:03:03.239756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.613 [2024-12-05 10:03:03.239770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.613 [2024-12-05 10:03:03.239800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.613 [2024-12-05 10:03:03.239831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.613 [2024-12-05 10:03:03.239845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.874 [2024-12-05 10:03:03.242016] 
bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:15.874 [2024-12-05 10:03:03.242121] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:31:15.874 [2024-12-05 10:03:03.242639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.874 [2024-12-05 10:03:03.242679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.874 [2024-12-05 10:03:03.242714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.875 [2024-12-05 10:03:03.242730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.875 [2024-12-05 10:03:03.242745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.875 [2024-12-05 10:03:03.242758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.875 [2024-12-05 10:03:03.242774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.875 [2024-12-05 10:03:03.242788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.875 [2024-12-05 10:03:03.242802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.875 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.875 10:03:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:31:15.875 [2024-12-05 10:03:03.249598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.252559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.259606] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.875 [2024-12-05 10:03:03.259663] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.875 [2024-12-05 10:03:03.259677] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.259698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.875 [2024-12-05 10:03:03.259745] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
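The remove_listener calls at @195/@196 are what kick off the qpair teardown and the reconnect attempts that follow: the 4420 path disappears while 4421 stays up. A hedged sketch of that step plus the path check the harness performs afterwards, composed entirely from commands already traced in this run:

  # Drop the original 4420 data path on each target...
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
  # ...then verify each discovered controller is left with only the 4421 path.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect: 4421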
00:31:15.875 [2024-12-05 10:03:03.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.875 [2024-12-05 10:03:03.259905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.875 [2024-12-05 10:03:03.259925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.875 [2024-12-05 10:03:03.259989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.260043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.875 [2024-12-05 10:03:03.260064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.875 [2024-12-05 10:03:03.260083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.875 [2024-12-05 10:03:03.260098] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.875 [2024-12-05 10:03:03.260109] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.875 [2024-12-05 10:03:03.260119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.875 [2024-12-05 10:03:03.262569] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.875 [2024-12-05 10:03:03.262625] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.875 [2024-12-05 10:03:03.262654] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.262680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.875 [2024-12-05 10:03:03.262717] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.262792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.875 [2024-12-05 10:03:03.262829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.875 [2024-12-05 10:03:03.262848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.875 [2024-12-05 10:03:03.262873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.262897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.875 [2024-12-05 10:03:03.262911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.875 [2024-12-05 10:03:03.262924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
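For orientation amid the reconnect logging: the get_notification_count checks earlier in the run (notification_count=2 with notify_id=2 after the first namespaces, then notification_count=2 with notify_id=4 after null1/null3) follow a cursor pattern. A hedged reconstruction consistent with the traced values:

  # notify_id is a cursor: each query asks only for events past the last batch.
  notify_id=0
  get_notification_count() {
      notification_count=$(scripts/rpc.py -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))   # advance past what we saw
  }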
00:31:15.875 [2024-12-05 10:03:03.262972] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.875 [2024-12-05 10:03:03.262983] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.875 [2024-12-05 10:03:03.262990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.875 [2024-12-05 10:03:03.269761] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.875 [2024-12-05 10:03:03.269816] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.875 [2024-12-05 10:03:03.269829] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.269838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.875 [2024-12-05 10:03:03.269876] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.269995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.875 [2024-12-05 10:03:03.270044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.875 [2024-12-05 10:03:03.270063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.875 [2024-12-05 10:03:03.270097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.270148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.875 [2024-12-05 10:03:03.270168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.875 [2024-12-05 10:03:03.270182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.875 [2024-12-05 10:03:03.270197] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.875 [2024-12-05 10:03:03.270207] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.875 [2024-12-05 10:03:03.270216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.875 [2024-12-05 10:03:03.272731] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.875 [2024-12-05 10:03:03.272784] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.875 [2024-12-05 10:03:03.272797] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.875 [2024-12-05 10:03:03.272806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.875 [2024-12-05 10:03:03.272853] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.875 [2024-12-05 10:03:03.272959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.875 [2024-12-05 10:03:03.273009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.875 [2024-12-05 10:03:03.273028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.875 [2024-12-05 10:03:03.273055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.875 [2024-12-05 10:03:03.273080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.876 [2024-12-05 10:03:03.273095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.876 [2024-12-05 10:03:03.273109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.876 [2024-12-05 10:03:03.273122] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.876 [2024-12-05 10:03:03.273132] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.876 [2024-12-05 10:03:03.273140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.876 [2024-12-05 10:03:03.279890] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.876 [2024-12-05 10:03:03.279956] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.876 [2024-12-05 10:03:03.279970] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.279979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.876 [2024-12-05 10:03:03.280014] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:15.876 [2024-12-05 10:03:03.280085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.876 [2024-12-05 10:03:03.280130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.876 [2024-12-05 10:03:03.280164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.876 [2024-12-05 10:03:03.280190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.876 [2024-12-05 10:03:03.280237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.876 [2024-12-05 10:03:03.280256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.876 [2024-12-05 10:03:03.280271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.876 [2024-12-05 10:03:03.280285] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.876 [2024-12-05 10:03:03.280294] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.876 [2024-12-05 10:03:03.280303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.876 [2024-12-05 10:03:03.282867] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.876 [2024-12-05 10:03:03.282920] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.876 [2024-12-05 10:03:03.282933] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.282956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.876 [2024-12-05 10:03:03.282995] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.283066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.876 [2024-12-05 10:03:03.283096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.876 [2024-12-05 10:03:03.283114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.876 [2024-12-05 10:03:03.283155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.876 [2024-12-05 10:03:03.283178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.876 [2024-12-05 10:03:03.283193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.876 [2024-12-05 10:03:03.283207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
00:31:15.876 [2024-12-05 10:03:03.283220] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.876 [2024-12-05 10:03:03.283230] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.876 [2024-12-05 10:03:03.283238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.876 [2024-12-05 10:03:03.290031] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.876 [2024-12-05 10:03:03.290106] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.876 [2024-12-05 10:03:03.290120] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.290129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.876 [2024-12-05 10:03:03.290181] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.290262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.876 [2024-12-05 10:03:03.290337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.876 [2024-12-05 10:03:03.290368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.876 [2024-12-05 10:03:03.290396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.876 [2024-12-05 10:03:03.290446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.876 [2024-12-05 10:03:03.290472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.876 [2024-12-05 10:03:03.290487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.876 [2024-12-05 10:03:03.290501] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.876 [2024-12-05 10:03:03.290512] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.876 [2024-12-05 10:03:03.290521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.876 [2024-12-05 10:03:03.293011] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.876 [2024-12-05 10:03:03.293066] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.876 [2024-12-05 10:03:03.293079] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.876 [2024-12-05 10:03:03.293088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.876 [2024-12-05 10:03:03.293120] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.876 [2024-12-05 10:03:03.293193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.876 [2024-12-05 10:03:03.293225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.876 [2024-12-05 10:03:03.293243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.876 [2024-12-05 10:03:03.293270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.876 [2024-12-05 10:03:03.293308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.876 [2024-12-05 10:03:03.293324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.876 [2024-12-05 10:03:03.293338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.876 [2024-12-05 10:03:03.293352] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.876 [2024-12-05 10:03:03.293361] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.876 [2024-12-05 10:03:03.293370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.876 [2024-12-05 10:03:03.300197] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.876 [2024-12-05 10:03:03.300252] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.876 [2024-12-05 10:03:03.300265] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.300273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.877 [2024-12-05 10:03:03.300304] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:15.877 [2024-12-05 10:03:03.300409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.877 [2024-12-05 10:03:03.300473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.877 [2024-12-05 10:03:03.300492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.877 [2024-12-05 10:03:03.300517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.877 [2024-12-05 10:03:03.300579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.877 [2024-12-05 10:03:03.300599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.877 [2024-12-05 10:03:03.300614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.877 [2024-12-05 10:03:03.300627] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.877 [2024-12-05 10:03:03.300637] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.877 [2024-12-05 10:03:03.300646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.877 [2024-12-05 10:03:03.303138] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.877 [2024-12-05 10:03:03.303195] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.877 [2024-12-05 10:03:03.303214] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.303223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.877 [2024-12-05 10:03:03.303254] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.303333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.877 [2024-12-05 10:03:03.303387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.877 [2024-12-05 10:03:03.303407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.877 [2024-12-05 10:03:03.303432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.877 [2024-12-05 10:03:03.303455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.877 [2024-12-05 10:03:03.303469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.877 [2024-12-05 10:03:03.303482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
00:31:15.877 [2024-12-05 10:03:03.303496] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.877 [2024-12-05 10:03:03.303521] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.877 [2024-12-05 10:03:03.303545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.877 [2024-12-05 10:03:03.310328] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.877 [2024-12-05 10:03:03.310384] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.877 [2024-12-05 10:03:03.310397] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.310405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.877 [2024-12-05 10:03:03.310437] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.310508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.877 [2024-12-05 10:03:03.310538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.877 [2024-12-05 10:03:03.310556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.877 [2024-12-05 10:03:03.310581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.877 [2024-12-05 10:03:03.310621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.877 [2024-12-05 10:03:03.310636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.877 [2024-12-05 10:03:03.310650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.877 [2024-12-05 10:03:03.310663] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.877 [2024-12-05 10:03:03.310673] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.877 [2024-12-05 10:03:03.310681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.877 [2024-12-05 10:03:03.313270] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.877 [2024-12-05 10:03:03.313308] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.877 [2024-12-05 10:03:03.313319] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.877 [2024-12-05 10:03:03.313329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.877 [2024-12-05 10:03:03.313375] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.313484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.877 [2024-12-05 10:03:03.313515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.877 [2024-12-05 10:03:03.313533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.877 [2024-12-05 10:03:03.313559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.877 [2024-12-05 10:03:03.313582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.877 [2024-12-05 10:03:03.313597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.877 [2024-12-05 10:03:03.313611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.877 [2024-12-05 10:03:03.313625] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.877 [2024-12-05 10:03:03.313635] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.877 [2024-12-05 10:03:03.313643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.877 [2024-12-05 10:03:03.320452] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.877 [2024-12-05 10:03:03.320505] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.877 [2024-12-05 10:03:03.320518] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.877 [2024-12-05 10:03:03.320526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.877 [2024-12-05 10:03:03.320561] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:15.877 [2024-12-05 10:03:03.320631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.877 [2024-12-05 10:03:03.320660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.877 [2024-12-05 10:03:03.320693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.877 [2024-12-05 10:03:03.320734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.877 [2024-12-05 10:03:03.320785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.877 [2024-12-05 10:03:03.320805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.878 [2024-12-05 10:03:03.320820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.878 [2024-12-05 10:03:03.320833] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.878 [2024-12-05 10:03:03.320843] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.878 [2024-12-05 10:03:03.320851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.878 [2024-12-05 10:03:03.323392] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.878 [2024-12-05 10:03:03.323430] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.878 [2024-12-05 10:03:03.323442] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.323467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.878 [2024-12-05 10:03:03.323502] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.323570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.878 [2024-12-05 10:03:03.323631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.878 [2024-12-05 10:03:03.323649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.878 [2024-12-05 10:03:03.323690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.878 [2024-12-05 10:03:03.323717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.878 [2024-12-05 10:03:03.323732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.878 [2024-12-05 10:03:03.323747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
00:31:15.878 [2024-12-05 10:03:03.323760] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.878 [2024-12-05 10:03:03.323771] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.878 [2024-12-05 10:03:03.323779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.878 [2024-12-05 10:03:03.330580] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.878 [2024-12-05 10:03:03.330685] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.878 [2024-12-05 10:03:03.330698] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.330707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.878 [2024-12-05 10:03:03.330752] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.330830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.878 [2024-12-05 10:03:03.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.878 [2024-12-05 10:03:03.330888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.878 [2024-12-05 10:03:03.330914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.878 [2024-12-05 10:03:03.331073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.878 [2024-12-05 10:03:03.331096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.878 [2024-12-05 10:03:03.331112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.878 [2024-12-05 10:03:03.331126] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.878 [2024-12-05 10:03:03.331136] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.878 [2024-12-05 10:03:03.331144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.878 [2024-12-05 10:03:03.333518] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.878 [2024-12-05 10:03:03.333561] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.878 [2024-12-05 10:03:03.333574] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.878 [2024-12-05 10:03:03.333583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.878 [2024-12-05 10:03:03.333620] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.333712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.878 [2024-12-05 10:03:03.333743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.878 [2024-12-05 10:03:03.333761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.878 [2024-12-05 10:03:03.333786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.878 [2024-12-05 10:03:03.333810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.878 [2024-12-05 10:03:03.333824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.878 [2024-12-05 10:03:03.333840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.878 [2024-12-05 10:03:03.333853] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.878 [2024-12-05 10:03:03.333864] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.878 [2024-12-05 10:03:03.333872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.878 [2024-12-05 10:03:03.340766] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.878 [2024-12-05 10:03:03.340822] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.878 [2024-12-05 10:03:03.340835] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.340843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.878 [2024-12-05 10:03:03.340880] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:15.878 [2024-12-05 10:03:03.341011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.878 [2024-12-05 10:03:03.341062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.878 [2024-12-05 10:03:03.341081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.878 [2024-12-05 10:03:03.341108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.878 [2024-12-05 10:03:03.341157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.878 [2024-12-05 10:03:03.341176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.878 [2024-12-05 10:03:03.341190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.878 [2024-12-05 10:03:03.341205] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.878 [2024-12-05 10:03:03.341215] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.878 [2024-12-05 10:03:03.341223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.878 [2024-12-05 10:03:03.343636] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.878 [2024-12-05 10:03:03.343675] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.878 [2024-12-05 10:03:03.343687] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.878 [2024-12-05 10:03:03.343696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.878 [2024-12-05 10:03:03.343730] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.343816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.879 [2024-12-05 10:03:03.343863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.879 [2024-12-05 10:03:03.343880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.879 [2024-12-05 10:03:03.343906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.879 [2024-12-05 10:03:03.343929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.879 [2024-12-05 10:03:03.343944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.879 [2024-12-05 10:03:03.343972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
00:31:15.879 [2024-12-05 10:03:03.343990] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.879 [2024-12-05 10:03:03.344000] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.879 [2024-12-05 10:03:03.344008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.879 [2024-12-05 10:03:03.350895] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.879 [2024-12-05 10:03:03.350940] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.879 [2024-12-05 10:03:03.350951] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.350976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.879 [2024-12-05 10:03:03.351010] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.351101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.879 [2024-12-05 10:03:03.351132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.879 [2024-12-05 10:03:03.351150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.879 [2024-12-05 10:03:03.351175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.879 [2024-12-05 10:03:03.351225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.879 [2024-12-05 10:03:03.351245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.879 [2024-12-05 10:03:03.351260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.879 [2024-12-05 10:03:03.351273] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.879 [2024-12-05 10:03:03.351282] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.879 [2024-12-05 10:03:03.351291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.879 [2024-12-05 10:03:03.353745] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.879 [2024-12-05 10:03:03.353797] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.879 [2024-12-05 10:03:03.353809] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.879 [2024-12-05 10:03:03.353817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.879 [2024-12-05 10:03:03.353860] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.353950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.879 [2024-12-05 10:03:03.354031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.879 [2024-12-05 10:03:03.354067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.879 [2024-12-05 10:03:03.354095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.879 [2024-12-05 10:03:03.354120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.879 [2024-12-05 10:03:03.354136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.879 [2024-12-05 10:03:03.354150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.879 [2024-12-05 10:03:03.354167] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.879 [2024-12-05 10:03:03.354177] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.879 [2024-12-05 10:03:03.354186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.879 [2024-12-05 10:03:03.361025] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.879 [2024-12-05 10:03:03.361064] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.879 [2024-12-05 10:03:03.361075] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.361084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.879 [2024-12-05 10:03:03.361119] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:15.879 [2024-12-05 10:03:03.361188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.879 [2024-12-05 10:03:03.361250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.879 [2024-12-05 10:03:03.361267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.879 [2024-12-05 10:03:03.361293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.879 [2024-12-05 10:03:03.361344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.879 [2024-12-05 10:03:03.361363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.879 [2024-12-05 10:03:03.361378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.879 [2024-12-05 10:03:03.361392] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.879 [2024-12-05 10:03:03.361401] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.879 [2024-12-05 10:03:03.361410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.879 [2024-12-05 10:03:03.363875] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.879 [2024-12-05 10:03:03.363914] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.879 [2024-12-05 10:03:03.363926] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.363950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.879 [2024-12-05 10:03:03.363985] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.879 [2024-12-05 10:03:03.364070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.879 [2024-12-05 10:03:03.364101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.879 [2024-12-05 10:03:03.364119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.879 [2024-12-05 10:03:03.364144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.879 [2024-12-05 10:03:03.364167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.879 [2024-12-05 10:03:03.364181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.879 [2024-12-05 10:03:03.364195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 
00:31:15.879 [2024-12-05 10:03:03.364209] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.879 [2024-12-05 10:03:03.364218] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.880 [2024-12-05 10:03:03.364226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:31:15.880 [2024-12-05 10:03:03.371135] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:15.880 [2024-12-05 10:03:03.371188] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:15.880 [2024-12-05 10:03:03.371200] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:15.880 [2024-12-05 10:03:03.371208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:15.880 [2024-12-05 10:03:03.371251] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:15.880 [2024-12-05 10:03:03.371325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.880 [2024-12-05 10:03:03.371355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:31:15.880 [2024-12-05 10:03:03.371373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:31:15.880 [2024-12-05 10:03:03.371397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:31:15.880 [2024-12-05 10:03:03.371445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:15.880 [2024-12-05 10:03:03.371465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:15.880 [2024-12-05 10:03:03.371479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:15.880 [2024-12-05 10:03:03.371491] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:15.880 [2024-12-05 10:03:03.371501] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:15.880 [2024-12-05 10:03:03.371509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:15.880 [2024-12-05 10:03:03.374001] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:31:15.880 [2024-12-05 10:03:03.374054] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:31:15.880 [2024-12-05 10:03:03.374082] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 
00:31:15.880 [2024-12-05 10:03:03.374091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:31:15.880 [2024-12-05 10:03:03.374129] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:31:15.880 [2024-12-05 10:03:03.374202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.880 [2024-12-05 10:03:03.374231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c180 with addr=10.0.0.4, port=4420 00:31:15.880 [2024-12-05 10:03:03.374248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:31:15.880 [2024-12-05 10:03:03.374304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002c180 (9): Bad file descriptor 00:31:15.880 [2024-12-05 10:03:03.374362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:31:15.880 [2024-12-05 10:03:03.374378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:31:15.880 [2024-12-05 10:03:03.374392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:31:15.880 [2024-12-05 10:03:03.374407] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:31:15.880 [2024-12-05 10:03:03.374417] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:31:15.880 [2024-12-05 10:03:03.374425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 
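With both 10.0.0.3:4420 and 10.0.0.4:4420 refusing connections on every retry, the discovery pollers re-read the log page and reconcile paths, which is what the entries that follow show: the 4420 entries are reported "not found" and dropped, while the 4421 entries are "found again" and kept. The same discovery log page can be read out of band; a sketch assuming a standard nvme-cli install on a host that can reach the targets:

  # query each discovery controller the test advertises over mDNS
  nvme discover -t tcp -a 10.0.0.3 -s 8009
  nvme discover -t tcp -a 10.0.0.4 -s 8009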
00:31:15.880 [2024-12-05 10:03:03.374495] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:31:15.880 [2024-12-05 10:03:03.374528] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:31:15.880 [2024-12-05 10:03:03.374575] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:15.880 [2024-12-05 10:03:03.374695] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:31:15.880 [2024-12-05 10:03:03.374723] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:31:15.880 [2024-12-05 10:03:03.374752] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:15.880 [2024-12-05 10:03:03.460595] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:31:15.880 [2024-12-05 10:03:03.460694] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:31:16.818 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.077 10:03:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:31:17.078 [2024-12-05 10:03:04.617780] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:31:18.021 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:31:18.021 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.022 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 
00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.279 [2024-12-05 10:03:05.756999] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:31:18.279 2024/12/05 10:03:05 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:31:18.279 request: 00:31:18.279 { 00:31:18.279 "method": "bdev_nvme_start_mdns_discovery", 00:31:18.279 "params": { 00:31:18.279 "name": "mdns", 00:31:18.279 "svcname": "_nvme-disc._http", 00:31:18.279 "hostnqn": "nqn.2021-12.io.spdk:test" 00:31:18.279 } 00:31:18.279 } 00:31:18.279 Got JSON-RPC error response 00:31:18.279 GoRPCClient: error on JSON-RPC call 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:18.279 10:03:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:31:18.845 [2024-12-05 10:03:06.345780] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:31:18.845 [2024-12-05 10:03:06.445777] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:31:19.102 [2024-12-05 10:03:06.545785] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:31:19.102 [2024-12-05 10:03:06.545819] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:31:19.102 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:19.102 cookie is 0 00:31:19.102 is_local: 1 00:31:19.102 our_own: 0 00:31:19.102 wide_area: 0 00:31:19.102 multicast: 1 00:31:19.102 cached: 1 00:31:19.102 [2024-12-05 10:03:06.645786] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:31:19.102 [2024-12-05 10:03:06.645820] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:31:19.102 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:19.102 cookie is 0 00:31:19.102 is_local: 1 00:31:19.102 our_own: 0 00:31:19.102 wide_area: 0 00:31:19.102 multicast: 1 00:31:19.102 cached: 1 00:31:19.102 [2024-12-05 10:03:06.645839] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:31:19.360 [2024-12-05 10:03:06.745785] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:31:19.360 [2024-12-05 10:03:06.745818] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:31:19.360 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:19.360 cookie is 0 00:31:19.360 is_local: 1 00:31:19.360 our_own: 0 00:31:19.360 wide_area: 0 00:31:19.360 multicast: 1 00:31:19.360 cached: 1 00:31:19.360 [2024-12-05 10:03:06.845786] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:31:19.360 [2024-12-05 10:03:06.845820] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:31:19.360 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:19.360 cookie is 0 00:31:19.360 is_local: 1 00:31:19.360 our_own: 0 00:31:19.360 wide_area: 0 00:31:19.360 multicast: 1 00:31:19.360 cached: 1 00:31:19.360 [2024-12-05 10:03:06.845850] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:31:20.296 [2024-12-05 10:03:07.560171] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:31:20.296 [2024-12-05 10:03:07.560209] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:31:20.296 [2024-12-05 10:03:07.560241] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:31:20.296 [2024-12-05 10:03:07.646323] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:31:20.296 [2024-12-05 10:03:07.712909] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:31:20.296 [2024-12-05 10:03:07.714057] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x61500002e200:1 started. 00:31:20.296 [2024-12-05 10:03:07.716701] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:31:20.296 [2024-12-05 10:03:07.716740] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:31:20.296 [2024-12-05 10:03:07.719107] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x61500002e200 was disconnected and freed. delete nvme_qpair. 
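For reference, the mDNS discovery round-trip exercised in this test can be driven by hand over the same RPC socket. A minimal sketch, assuming a running SPDK host application listening on /tmp/host.sock, a running avahi daemon, and targets advertising the _nvme-disc._tcp service (the rpc.py path, socket, method names, and hostnqn below all match this test run; anything else is illustrative):

#!/usr/bin/env bash
# Sketch of the start/inspect/stop cycle for bdev_nvme mDNS discovery.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start mDNS-based discovery; attached controllers get names like mdns0_nvme0.
$rpc_py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

sleep 5  # give the avahi poller time to resolve services and attach controllers

# Inspect what was discovered and attached.
$rpc_py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'
$rpc_py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc_py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

# Starting again with the same name (or the same service type) fails with
# Code=-17 Msg=File exists, as seen in the JSON-RPC error responses above.
$rpc_py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || echo 'already running'

# Tear discovery down again.
$rpc_py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns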
00:31:20.296 [2024-12-05 10:03:07.759782] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:31:20.296 [2024-12-05 10:03:07.759817] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:31:20.296 [2024-12-05 10:03:07.759870] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:20.296 [2024-12-05 10:03:07.847954] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:31:20.296 [2024-12-05 10:03:07.912529] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:31:20.296 [2024-12-05 10:03:07.913532] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002ee80:1 started. 00:31:20.296 [2024-12-05 10:03:07.916192] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:31:20.296 [2024-12-05 10:03:07.916230] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:31:20.296 [2024-12-05 10:03:07.918820] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002ee80 was disconnected and freed. delete nvme_qpair. 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:31:23.586 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == 
\m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 [2024-12-05 10:03:10.953790] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:31:23.587 2024/12/05 10:03:10 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:31:23.587 request: 00:31:23.587 { 00:31:23.587 "method": "bdev_nvme_start_mdns_discovery", 00:31:23.587 "params": { 00:31:23.587 "name": "cdc", 00:31:23.587 "svcname": "_nvme-disc._tcp", 00:31:23.587 "hostnqn": "nqn.2021-12.io.spdk:test" 00:31:23.587 } 00:31:23.587 } 00:31:23.587 Got JSON-RPC error response 00:31:23.587 GoRPCClient: error on JSON-RPC call 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:31:23.587 10:03:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:31:23.587 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:23.587 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:31:23.587 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:23.587 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:23.587 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:23.587 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:23.587 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:23.587 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.588 10:03:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:31:23.588 [2024-12-05 10:03:11.145808] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:31:24.524 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:24.783 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:31:24.783 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:31:24.783 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 105549 00:31:24.783 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 105549 00:31:25.350 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 105579 00:31:25.350 Got SIGTERM, quitting. 00:31:25.350 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:31:25.350 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.350 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 10:03:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:31:25.350 00:31:25.350 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:31:25.350 avahi-daemon 0.8 exiting. 
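The found/not-found assertions above reduce to scanning avahi-browse's parseable output for a record that names the advertising process, address, and port. A stand-alone sketch of the same check (the helper name check_mdns_record is illustrative; the test's own helper is check_mdns_request_exists):

# Succeeds when a matching record is present ('found') or absent ('not found').
check_mdns_record() {
    local process=$1 ip=$2 port=$3 check_type=$4 line
    # -t: terminate once the cache is dumped, -r: resolve, -p: parseable output
    while IFS= read -r line; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0 || return 1
        fi
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)
    [[ $check_type == found ]] && return 1 || return 0
}

check_mdns_record spdk1 10.0.0.3 8009 found        # while discovery is advertised
check_mdns_record spdk1 10.0.0.3 8009 'not found'  # after nvmf_stop_mdns_prr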
00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.610 rmmod nvme_tcp 00:31:25.610 rmmod nvme_fabrics 00:31:25.610 rmmod nvme_keyring 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 105493 ']' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 105493 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 105493 ']' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 105493 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105493 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:25.610 killing process with pid 105493 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105493' 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 105493 00:31:25.610 10:03:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 105493 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:26.548 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:31:26.807 00:31:26.807 real 0m24.746s 00:31:26.807 user 0m46.538s 00:31:26.807 sys 0m2.428s 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.807 ************************************ 00:31:26.807 END TEST nvmf_mdns_discovery 00:31:26.807 ************************************ 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.807 ************************************ 00:31:26.807 START TEST nvmf_host_multipath 00:31:26.807 ************************************ 00:31:26.807 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:31:27.068 * Looking for test storage... 
00:31:27.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:27.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.068 --rc genhtml_branch_coverage=1 00:31:27.068 --rc genhtml_function_coverage=1 00:31:27.068 --rc genhtml_legend=1 00:31:27.068 --rc geninfo_all_blocks=1 00:31:27.068 --rc geninfo_unexecuted_blocks=1 00:31:27.068 00:31:27.068 ' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:27.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.068 --rc genhtml_branch_coverage=1 00:31:27.068 --rc genhtml_function_coverage=1 00:31:27.068 --rc genhtml_legend=1 00:31:27.068 --rc geninfo_all_blocks=1 00:31:27.068 --rc geninfo_unexecuted_blocks=1 00:31:27.068 00:31:27.068 ' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:27.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.068 --rc genhtml_branch_coverage=1 00:31:27.068 --rc genhtml_function_coverage=1 00:31:27.068 --rc genhtml_legend=1 00:31:27.068 --rc geninfo_all_blocks=1 00:31:27.068 --rc geninfo_unexecuted_blocks=1 00:31:27.068 00:31:27.068 ' 00:31:27.068 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:27.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.068 --rc genhtml_branch_coverage=1 00:31:27.068 --rc genhtml_function_coverage=1 00:31:27.068 --rc genhtml_legend=1 00:31:27.069 --rc geninfo_all_blocks=1 00:31:27.069 --rc geninfo_unexecuted_blocks=1 00:31:27.069 00:31:27.069 ' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:27.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:27.069 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:27.070 Cannot find device "nvmf_init_br" 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:27.070 Cannot find device "nvmf_init_br2" 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:27.070 Cannot find device "nvmf_tgt_br" 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:27.070 Cannot find device "nvmf_tgt_br2" 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:31:27.070 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:27.330 Cannot find device "nvmf_init_br" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:27.330 Cannot find device "nvmf_init_br2" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:27.330 Cannot find device "nvmf_tgt_br" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:27.330 Cannot find device "nvmf_tgt_br2" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:27.330 Cannot find device "nvmf_br" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:27.330 Cannot find device "nvmf_init_if" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:27.330 Cannot find device "nvmf_init_if2" 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:31:27.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:27.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
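(Annotation: the nvmf_veth_init sequence above, completed by the remaining bridge enslave steps at common.sh@212-@214 just below, builds the test topology: two initiator-side veth pairs left in the root namespace, two target-side pairs whose device ends move into nvmf_tgt_ns_spdk, and one bridge joining the four peer ends. A minimal standalone sketch of the same recipe, using only the names and addresses visible in this log and omitting the "Cannot find device" teardown guards:)

    # Namespace for the SPDK target; the initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # The target ends live inside the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: initiators 10.0.0.1/.2, targets 10.0.0.3/.4, all on one /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up, then bridge the four peer ends together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

(The iptables ACCEPT rules for port 4420 and the cross-namespace pings that follow in the log then verify this topology before the target starts.)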
00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:27.330 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:27.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:27.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:31:27.590 00:31:27.590 --- 10.0.0.3 ping statistics --- 00:31:27.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.590 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:27.590 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:27.590 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:31:27.590 00:31:27.590 --- 10.0.0.4 ping statistics --- 00:31:27.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.590 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:27.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:31:27.590 00:31:27.590 --- 10.0.0.1 ping statistics --- 00:31:27.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.590 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:31:27.590 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:27.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:27.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:31:27.590 00:31:27.590 --- 10.0.0.2 ping statistics --- 00:31:27.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.590 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=106227 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 106227 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 106227 ']' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.590 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:27.590 [2024-12-05 10:03:15.170280] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:31:27.590 [2024-12-05 10:03:15.170489] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.847 [2024-12-05 10:03:15.368363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:28.105 [2024-12-05 10:03:15.532945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.105 [2024-12-05 10:03:15.533014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.105 [2024-12-05 10:03:15.533059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.105 [2024-12-05 10:03:15.533094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.105 [2024-12-05 10:03:15.533114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.105 [2024-12-05 10:03:15.535984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.105 [2024-12-05 10:03:15.535999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=106227 00:31:28.672 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.930 [2024-12-05 10:03:16.380392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.930 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:29.189 Malloc0 00:31:29.189 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:29.447 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:29.706 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:29.966 [2024-12-05 10:03:17.506380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:29.966 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4421 00:31:30.225 [2024-12-05 10:03:17.718305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=106331 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 106331 /var/tmp/bdevperf.sock 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 106331 ']' 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.225 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:31.161 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.161 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:31:31.161 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:31.420 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:31.678 Nvme0n1 00:31:31.937 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:32.196 Nvme0n1 00:31:32.196 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:32.196 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:31:33.130 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:31:33.130 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:33.388 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n 
optimized 00:31:33.645 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:31:33.645 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106418 00:31:33.645 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:33.645 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:40.210 Attaching 4 probes... 00:31:40.210 @path[10.0.0.3, 4421]: 15931 00:31:40.210 @path[10.0.0.3, 4421]: 15546 00:31:40.210 @path[10.0.0.3, 4421]: 15306 00:31:40.210 @path[10.0.0.3, 4421]: 15609 00:31:40.210 @path[10.0.0.3, 4421]: 15693 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106418 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:31:40.210 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:31:40.469 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:31:40.469 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:31:40.469 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106550 00:31:40.469 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:40.469 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:47.131 Attaching 4 probes... 00:31:47.131 @path[10.0.0.3, 4420]: 15449 00:31:47.131 @path[10.0.0.3, 4420]: 16037 00:31:47.131 @path[10.0.0.3, 4420]: 16010 00:31:47.131 @path[10.0.0.3, 4420]: 16292 00:31:47.131 @path[10.0.0.3, 4420]: 16802 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106550 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:47.131 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:31:47.390 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:31:47.390 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:47.390 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106681 00:31:47.390 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:31:53.953 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:53.953 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:53.953 Attaching 4 probes... 
00:31:53.953 @path[10.0.0.3, 4421]: 12428 00:31:53.953 @path[10.0.0.3, 4421]: 15686 00:31:53.953 @path[10.0.0.3, 4421]: 15746 00:31:53.953 @path[10.0.0.3, 4421]: 15906 00:31:53.953 @path[10.0.0.3, 4421]: 15553 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106681 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:53.953 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:31:54.212 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:31:54.212 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:54.212 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106806 00:31:54.212 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:32:00.778 10:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:00.778 10:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:32:00.778 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:32:00.778 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:00.778 Attaching 4 probes... 
00:32:00.778 00:32:00.778 00:32:00.778 00:32:00.778 00:32:00.779 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106806 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:32:00.779 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:32:01.038 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:32:01.038 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106951 00:32:01.038 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:32:01.038 10:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:07.603 Attaching 4 probes... 
00:32:07.603 @path[10.0.0.3, 4421]: 15347 00:32:07.603 @path[10.0.0.3, 4421]: 15519 00:32:07.603 @path[10.0.0.3, 4421]: 15351 00:32:07.603 @path[10.0.0.3, 4421]: 15683 00:32:07.603 @path[10.0.0.3, 4421]: 15876 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106951 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:07.603 10:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 [2024-12-05 10:03:55.090170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set
[... identical tcp.c:1790:nvmf_tcp_qpair_set_recv_state record repeated verbatim for timestamps 10:03:55.090237 through 10:03:55.091833; duplicate lines omitted ...]
[2024-12-05 10:03:55.091858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 10:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:32:08.539 10:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 10:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=107084
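(Annotation: the confirm_io_on_port checks that recur through this run, of which the records just above and below are one instance, all follow the same pattern. A sketch of that core, reconstructed from the xtrace records in this log; the trace.txt redirection and the argument handling are assumptions, while the helper paths, the subsystem NQN, and target pid 106227 are taken from the log:)

    # confirm_io_on_port <expected_ana_state> <expected_port>, reconstructed sketch.
    expected_state=$1    # e.g. non_optimized, optimized, or "" when no path should carry I/O
    expected_port=$2     # e.g. 4420 / 4421, or "" to match no port
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the nvmf_path.bt bpftrace program to the running target (pid 106227 here);
    # its per-path I/O counters are assumed to be redirected into trace.txt.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
    dtrace_pid=$!
    sleep 6    # let bdevperf I/O accumulate on whichever path ANA currently allows
    # Which listener does the subsystem report in the expected ANA state?
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
    # Which port did the traced I/O actually hit? First "@path[10.0.0.3, PORT]: count" line wins.
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    # Both views must agree with the expectation (the @70/@71 tests in the xtrace).
    [[ $active_port == "$expected_port" ]]
    [[ $port == "$expected_port" ]]
    # Cleanup (@72/@73): stop the tracer and drop the trace file.
    kill $dtrace_pid
    rm -f trace.txt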
10:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:08.539 10:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:15.106 Attaching 4 probes... 00:32:15.106 @path[10.0.0.3, 4420]: 15628 00:32:15.106 @path[10.0.0.3, 4420]: 14940 00:32:15.106 @path[10.0.0.3, 4420]: 15873 00:32:15.106 @path[10.0.0.3, 4420]: 15448 00:32:15.106 @path[10.0.0.3, 4420]: 14865 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 107084 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:15.106 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:32:15.365 [2024-12-05 10:04:02.749401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:32:15.365 10:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:32:15.624 10:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:32:22.188 10:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:32:22.188 10:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=107277 00:32:22.188 10:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:32:22.188 10:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106227 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:28.762 Attaching 4 probes... 00:32:28.762 @path[10.0.0.3, 4421]: 14695 00:32:28.762 @path[10.0.0.3, 4421]: 15676 00:32:28.762 @path[10.0.0.3, 4421]: 15679 00:32:28.762 @path[10.0.0.3, 4421]: 15110 00:32:28.762 @path[10.0.0.3, 4421]: 14571 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 107277 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 106331 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 106331 ']' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 106331 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106331 00:32:28.762 killing process with pid 106331 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106331' 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 106331 00:32:28.762 10:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 106331
00:32:28.762 {
00:32:28.762   "results": [
00:32:28.762     {
00:32:28.762       "job": "Nvme0n1",
00:32:28.762       "core_mask": "0x4",
00:32:28.762       "workload": "verify",
00:32:28.762       "status": "terminated",
00:32:28.762       "verify_range": {
00:32:28.762         "start": 0,
00:32:28.762         "length": 16384
00:32:28.762       },
00:32:28.762       "queue_depth": 128,
00:32:28.762       "io_size": 4096,
00:32:28.762       "runtime": 55.709377,
00:32:28.762       "iops": 6656.024891464861,
00:32:28.762       "mibps": 26.000097232284613,
00:32:28.762       "io_failed": 0,
00:32:28.762       "io_timeout": 0,
00:32:28.763       "avg_latency_us": 19206.95146471552,
00:32:28.763       "min_latency_us": 431.9418181818182,
00:32:28.763       "max_latency_us": 7046430.72
00:32:28.763     }
00:32:28.763   ],
00:32:28.763   "core_count": 1
00:32:28.763 }
00:32:28.763 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 106331 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt [2024-12-05 10:03:17.823278] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... [2024-12-05 10:03:17.823441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106331 ] [2024-12-05 10:03:17.998995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-12-05 10:03:18.150244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 Running I/O for 90 seconds... 8776.00 IOPS, 34.28 MiB/s [2024-12-05T10:04:16.392Z] 8556.50 IOPS, 33.42 MiB/s [2024-12-05T10:04:16.392Z] 8470.33 IOPS, 33.09 MiB/s [2024-12-05T10:04:16.392Z] 8305.00 IOPS, 32.44 MiB/s [2024-12-05T10:04:16.392Z] 8177.60 IOPS, 31.94 MiB/s [2024-12-05T10:04:16.392Z] 8107.17 IOPS, 31.67 MiB/s [2024-12-05T10:04:16.392Z] 8062.43 IOPS, 31.49 MiB/s [2024-12-05T10:04:16.392Z] 8022.25 IOPS, 31.34 MiB/s [2024-12-05T10:04:16.392Z] [2024-12-05 10:03:28.050438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05 10:03:28.050519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 [2024-12-05 10:03:28.050593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05 10:03:28.050648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 [2024-12-05 10:03:28.050679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05 10:03:28.050702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 [2024-12-05 10:03:28.050729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05 10:03:28.050752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 [2024-12-05 10:03:28.050781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05 10:03:28.050804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 [2024-12-05 10:03:28.054257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-05 10:03:28.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22
[~71 nvme_qpair.c command/completion NOTICE pairs elided (2024-12-05 10:03:28.050 - 10:03:28.058): READ commands lba 7208-7240 and WRITE commands lba 7248-7768, all sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0032-0078]
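
(Note: every completion elided above carries the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3, status code 0x02), i.e. the command landed on a path whose ANA group is currently inaccessible, so the initiator has to retry on the optimized path. The jq filter traced at multipath.sh@67 earlier picks the trsvcid of whichever path reports optimized; below is a standalone illustration against a hypothetical two-path JSON whose field layout is assumed from the filter itself:

    cat <<'EOF' > /tmp/paths.json
    [
      {"address": {"trsvcid": "4420"}, "ana_states": [{"ana_state": "inaccessible"}]},
      {"address": {"trsvcid": "4421"}, "ana_states": [{"ana_state": "optimized"}]}
    ]
    EOF
    jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' /tmp/paths.json
    # -> 4421, matching active_port=4421 in the trace above
)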
00:32:28.765 7964.22 IOPS, 31.11 MiB/s [2024-12-05T10:04:16.393Z] 7964.00 IOPS, 31.11 MiB/s [2024-12-05T10:04:16.393Z] 7969.45 IOPS, 31.13 MiB/s [2024-12-05T10:04:16.393Z] 7972.33 IOPS, 31.14 MiB/s [2024-12-05T10:04:16.393Z] 7988.77 IOPS, 31.21 MiB/s [2024-12-05T10:04:16.393Z] 8025.64 IOPS, 31.35 MiB/s [2024-12-05T10:04:16.393Z]
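
(Note: for reference, the killprocess helper traced earlier at common/autotest_common.sh@954-978, where it tears down the bdevperf process with pid 106331, reduces to the following shape. This is a sketch reconstructed from the traced commands only; the real helper's error handling and signal escalation are not shown, and the control flow around the sudo check is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                             # @954: a pid is required
        kill -0 "$pid" || return 1                            # @958: bail if it is already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then                       # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_2 here
        fi
        [ "$process_name" = sudo ] && return 1                # @964: never kill a wrapping sudo (assumed)
        echo "killing process with pid $pid"                  # @972
        kill "$pid"                                           # @973: default SIGTERM
        wait "$pid"                                           # @978: reap it and surface its exit status
    }
)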
[repeated nvme_qpair.c command/completion NOTICE pairs elided (2024-12-05 10:03:34.679 - 10:03:34.687): WRITE commands lba 42160-42480 interleaved with READ commands lba 41464-42096, all sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:32:28.768 [2024-12-05 10:03:34.687831] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.687852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.687887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.687909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.687951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.687976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.688006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.688027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.688056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.688078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.688108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.688129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:34.688158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.768 [2024-12-05 10:03:34.688180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:28.768 7991.40 IOPS, 31.22 MiB/s [2024-12-05T10:04:16.397Z] 7532.44 IOPS, 29.42 MiB/s [2024-12-05T10:04:16.397Z] 7555.53 IOPS, 29.51 MiB/s [2024-12-05T10:04:16.397Z] 7578.28 IOPS, 29.60 MiB/s [2024-12-05T10:04:16.397Z] 7595.58 IOPS, 29.67 MiB/s [2024-12-05T10:04:16.397Z] 7610.05 IOPS, 29.73 MiB/s [2024-12-05T10:04:16.397Z] 7611.81 IOPS, 29.73 MiB/s [2024-12-05T10:04:16.397Z] [2024-12-05 10:03:41.731504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 
m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.731929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.731993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:28.768 [2024-12-05 10:03:41.732732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.768 [2024-12-05 10:03:41.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.732782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.732803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.732830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.732852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.732879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.732900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.732927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:28.769 [2024-12-05 10:03:41.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.769 [2024-12-05 10:03:41.733868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.733917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.733992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:28.769 [2024-12-05 10:03:41.734960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.769 [2024-12-05 10:03:41.734995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:32:28.770 [2024-12-05 10:03:41.735075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.735924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.735980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.736827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.736849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.737833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.737862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.737896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.737919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.737979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.738036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.738145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.738199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:28.770 [2024-12-05 10:03:41.738369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.770 [2024-12-05 10:03:41.738405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:28.770 7569.23 IOPS, 29.57 MiB/s [2024-12-05T10:04:16.399Z] 7240.13 IOPS, 28.28 MiB/s [2024-12-05T10:04:16.399Z] 6938.46 IOPS, 27.10 MiB/s [2024-12-05T10:04:16.399Z] 6660.92 IOPS, 26.02 MiB/s [2024-12-05T10:04:16.399Z] 6404.73 IOPS, 25.02 MiB/s [2024-12-05T10:04:16.399Z] 6167.52 IOPS, 24.09 MiB/s [2024-12-05T10:04:16.399Z] 5947.25 IOPS, 23.23 MiB/s [2024-12-05T10:04:16.399Z] 5747.10 IOPS, 22.45 MiB/s [2024-12-05T10:04:16.399Z] 5810.43 IOPS, 22.70 MiB/s [2024-12-05T10:04:16.399Z] 5871.97 IOPS, 22.94 MiB/s [2024-12-05T10:04:16.399Z] 5927.72 IOPS, 23.16 MiB/s [2024-12-05T10:04:16.399Z] 5986.61 IOPS, 23.39 MiB/s [2024-12-05T10:04:16.400Z] 6045.68 IOPS, 23.62 MiB/s [2024-12-05T10:04:16.400Z] 6091.37 IOPS, 23.79 MiB/s [2024-12-05T10:04:16.400Z] [2024-12-05 10:03:55.091739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.091794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.091876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.091904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.091936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.092064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.092115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.092205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.092273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.092320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.771 [2024-12-05 10:03:55.092350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.093830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.093865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.093903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.093924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.093976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.093999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 
10:03:55.094019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.094979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.771 [2024-12-05 10:03:55.094996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.771 [2024-12-05 10:03:55.095029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.095231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21520 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 
[2024-12-05 10:03:55.095698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.095974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.095996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.772 [2024-12-05 10:03:55.096559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.772 [2024-12-05 10:03:55.096629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.772 [2024-12-05 10:03:55.096647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.096959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.096983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 
10:03:55.097378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.097970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.097989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.098026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.098062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:28.773 [2024-12-05 10:03:55.098098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.773 [2024-12-05 10:03:55.098140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.773 [2024-12-05 10:03:55.098177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.773 [2024-12-05 10:03:55.098213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.773 [2024-12-05 10:03:55.098232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.773 [2024-12-05 10:03:55.098250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21408 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.098827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.098844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:32:28.774 [2024-12-05 10:03:55.100788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.774 [2024-12-05 10:03:55.100902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.774 [2024-12-05 10:03:55.100933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.774 [2024-12-05 10:03:55.101006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:32:28.774 [2024-12-05 10:03:55.101296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.774 [2024-12-05 10:03:55.101350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4421 00:32:28.774 [2024-12-05 10:03:55.101375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:32:28.774 [2024-12-05 10:03:55.101409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:32:28.774 [2024-12-05 10:03:55.101443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.774 [2024-12-05 10:03:55.101464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.774 [2024-12-05 10:03:55.101483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.774 [2024-12-05 10:03:55.101503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.774 [2024-12-05 10:03:55.101521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.774 6136.64 IOPS, 23.97 MiB/s
[2024-12-05T10:04:16.403Z] 6180.24 IOPS, 24.14 MiB/s
[2024-12-05T10:04:16.403Z] 6227.26 IOPS, 24.33 MiB/s
[2024-12-05T10:04:16.403Z] 6259.51 IOPS, 24.45 MiB/s
[2024-12-05T10:04:16.403Z] 6299.40 IOPS, 24.61 MiB/s
[2024-12-05T10:04:16.403Z] 6337.24 IOPS, 24.75 MiB/s
[2024-12-05T10:04:16.403Z] 6362.14 IOPS, 24.85 MiB/s
[2024-12-05T10:04:16.403Z] 6392.12 IOPS, 24.97 MiB/s
[2024-12-05T10:04:16.403Z] 6415.20 IOPS, 25.06 MiB/s
[2024-12-05T10:04:16.403Z] 6450.02 IOPS, 25.20 MiB/s
[2024-12-05T10:04:16.403Z] [2024-12-05 10:04:05.180807] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:28.774 6478.28 IOPS, 25.31 MiB/s
[2024-12-05T10:04:16.403Z] 6501.32 IOPS, 25.40 MiB/s
[2024-12-05T10:04:16.403Z] 6524.77 IOPS, 25.49 MiB/s
[2024-12-05T10:04:16.403Z] 6543.69 IOPS, 25.56 MiB/s
[2024-12-05T10:04:16.403Z] 6548.88 IOPS, 25.58 MiB/s
[2024-12-05T10:04:16.403Z] 6566.02 IOPS, 25.65 MiB/s
[2024-12-05T10:04:16.403Z] 6589.90 IOPS, 25.74 MiB/s
[2024-12-05T10:04:16.403Z] 6614.68 IOPS, 25.84 MiB/s
[2024-12-05T10:04:16.403Z] 6630.48 IOPS, 25.90 MiB/s
[2024-12-05T10:04:16.403Z] 6644.13 IOPS, 25.95 MiB/s
[2024-12-05T10:04:16.403Z] Received shutdown signal, test time was about 55.710098 seconds
00:32:28.774
00:32:28.774 Latency(us)
00:32:28.774 [2024-12-05T10:04:16.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.774 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:28.774 Verification LBA range: start 0x0 length 0x4000
00:32:28.774 Nvme0n1 : 55.71 6656.02 26.00 0.00 0.00 19206.95 431.94 7046430.72
00:32:28.774 [2024-12-05T10:04:16.403Z] ===================================================================================================================
00:32:28.774 [2024-12-05T10:04:16.403Z] Total : 6656.02 26.00 0.00 0.00 19206.95 431.94 7046430.72
00:32:28.774 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:29.032 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:32:29.032 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:32:29.032 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:32:29.032 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:29.032 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:29.291 rmmod nvme_tcp
00:32:29.291 rmmod nvme_fabrics
00:32:29.291 rmmod nvme_keyring
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 106227 ']'
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 106227
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 106227 ']'
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 106227
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106227
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:29.291 killing process with pid 106227
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106227'
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 106227
00:32:29.291 10:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 106227
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:32:30.228 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:30.487 10:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:32:30.487
00:32:30.487 real 1m3.604s
00:32:30.487 user 2m59.833s
00:32:30.487 sys 0m12.770s
00:32:30.487 ************************************
00:32:30.487 END TEST nvmf_host_multipath
00:32:30.487 ************************************
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:30.487 ************************************
00:32:30.487 START TEST nvmf_timeout
00:32:30.487 ************************************
00:32:30.487 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:32:30.747 * Looking for test storage...
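The nvmf_host_multipath summary table above is internally consistent: bdevperf ran 4096-byte verify I/O, so the MiB/s column is just the IOPS column scaled by the I/O size. A one-line check (a hypothetical helper, not part of the autotest output; assumes any POSIX awk):

# 6656.02 IOPS x 4096 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
awk 'BEGIN { printf "%.2f MiB/s\n", 6656.02 * 4096 / 1048576 }'    # prints 26.00, matching the Total row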
00:32:30.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.747 --rc genhtml_branch_coverage=1 00:32:30.747 --rc genhtml_function_coverage=1 00:32:30.747 --rc genhtml_legend=1 00:32:30.747 --rc geninfo_all_blocks=1 00:32:30.747 --rc geninfo_unexecuted_blocks=1 00:32:30.747 00:32:30.747 ' 00:32:30.747 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.747 --rc genhtml_branch_coverage=1 00:32:30.747 --rc genhtml_function_coverage=1 00:32:30.747 --rc genhtml_legend=1 00:32:30.747 --rc geninfo_all_blocks=1 00:32:30.747 --rc geninfo_unexecuted_blocks=1 00:32:30.747 00:32:30.747 ' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.748 --rc genhtml_branch_coverage=1 00:32:30.748 --rc genhtml_function_coverage=1 00:32:30.748 --rc genhtml_legend=1 00:32:30.748 --rc geninfo_all_blocks=1 00:32:30.748 --rc geninfo_unexecuted_blocks=1 00:32:30.748 00:32:30.748 ' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.748 --rc genhtml_branch_coverage=1 00:32:30.748 --rc genhtml_function_coverage=1 00:32:30.748 --rc genhtml_legend=1 00:32:30.748 --rc geninfo_all_blocks=1 00:32:30.748 --rc geninfo_unexecuted_blocks=1 00:32:30.748 00:32:30.748 ' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.748 
10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.748 10:04:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:30.748 Cannot find device "nvmf_init_br" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:30.748 Cannot find device "nvmf_init_br2" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:32:30.748 Cannot find device "nvmf_tgt_br" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:30.748 Cannot find device "nvmf_tgt_br2" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:30.748 Cannot find device "nvmf_init_br" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:30.748 Cannot find device "nvmf_init_br2" 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:32:30.748 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:30.748 Cannot find device "nvmf_tgt_br" 00:32:30.749 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:32:30.749 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:31.008 Cannot find device "nvmf_tgt_br2" 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:31.008 Cannot find device "nvmf_br" 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:31.008 Cannot find device "nvmf_init_if" 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:31.008 Cannot find device "nvmf_init_if2" 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:31.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:31.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
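The failed "Cannot find device" probes above are the expected no-op cleanup on a fresh VM; the script then builds the test network from scratch: one network namespace for the target plus four veth pairs. A minimal, self-contained sketch of one such leg, using hypothetical demo_* names rather than the script's nvmf_* names:

    # Namespace standing in for nvmf_tgt_ns_spdk (all demo_* names are illustrative only).
    ip netns add demo_ns
    # A veth pair: demo_if will carry an IP address, demo_br will later join a bridge.
    ip link add demo_if type veth peer name demo_br
    # Move the addressed end into the namespace, as done with nvmf_tgt_if above.
    ip link set demo_if netns demo_ns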
00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:31.008 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
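With addresses assigned (10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the namespace) and the four bridge-side peers enslaved to nvmf_br, the ipts wrapper installs iptables rules tagged with an SPDK_NVMF comment, presumably so later teardown can find them again. Assuming a stock iptables, the tagged rules could be listed and removed like so (a sketch, not part of the script):

    # Show every rule carrying the SPDK_NVMF comment tag.
    iptables-save | grep SPDK_NVMF
    # Delete one by replaying its exact spec with -D; the comment must match verbatim.
    iptables -D INPUT -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'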
00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:31.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:31.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:32:31.267 00:32:31.267 --- 10.0.0.3 ping statistics --- 00:32:31.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.267 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:31.267 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:31.267 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:32:31.267 00:32:31.267 --- 10.0.0.4 ping statistics --- 00:32:31.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.267 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:31.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:32:31.267 00:32:31.267 --- 10.0.0.1 ping statistics --- 00:32:31.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.267 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:31.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:32:31.267 00:32:31.267 --- 10.0.0.2 ping statistics --- 00:32:31.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.267 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.267 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=107672 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 107672 00:32:31.268 10:04:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 107672 ']' 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.268 10:04:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.268 [2024-12-05 10:04:18.798302] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:32:31.268 [2024-12-05 10:04:18.798495] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.527 [2024-12-05 10:04:18.964336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.527 [2024-12-05 10:04:19.063323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.527 [2024-12-05 10:04:19.063395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.527 [2024-12-05 10:04:19.063432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.527 [2024-12-05 10:04:19.063455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.527 [2024-12-05 10:04:19.063468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
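waitforlisten above blocks until the freshly launched nvmf_tgt (pid 107672) answers on /var/tmp/spdk.sock. A rough equivalent of what the helper does, assuming rpc_get_methods as the liveness probe (the helper's internals may differ):

    # Poll the target's RPC server until it accepts requests on the default socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done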
00:32:31.527 [2024-12-05 10:04:19.065513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.527 [2024-12-05 10:04:19.065520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:32.095 10:04:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.355 [2024-12-05 10:04:19.981675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.614 10:04:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:32.873 Malloc0 00:32:32.873 10:04:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.132 10:04:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:33.391 10:04:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:33.650 [2024-12-05 10:04:21.079062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=107764 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 107764 /var/tmp/bdevperf.sock 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 107764 ']' 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
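Condensed, the target-side RPCs above amount to the following sequence (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the flag comments reflect the usual rpc.py meanings):

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the opts assembled earlier
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf is then started against its own RPC socket (-r /var/tmp/bdevperf.sock) with the workload queued but held (-z), so the controller can be attached before any I/O is issued.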
00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.650 10:04:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:33.650 [2024-12-05 10:04:21.190222] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:32:33.650 [2024-12-05 10:04:21.190392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107764 ] 00:32:33.908 [2024-12-05 10:04:21.355822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.908 [2024-12-05 10:04:21.468773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.475 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.475 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:32:34.475 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:34.746 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:32:35.023 NVMe0n1 00:32:35.023 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=107810 00:32:35.023 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.023 10:04:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:32:35.285 Running I/O for 10 seconds... 
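On the initiator side the test attaches the remote controller through bdevperf's socket and only then releases the queued workload. With --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, bdev_nvme retries a lost connection every 2 s and gives the controller up after 5 s of failed reconnects, which is the window this timeout test exercises. Condensed (paths abbreviated as before):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1    # retry setting exactly as issued by host/timeout.sh
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests             # kick off the queued -q 128 -w verify run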
00:32:36.225 10:04:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:36.225 8183.00 IOPS, 31.96 MiB/s [2024-12-05T10:04:23.854Z] [2024-12-05 10:04:23.833392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set
[the tcp.c:1790 recv-state message above repeats verbatim with successive timestamps, 10:04:23.833392 through 10:04:23.834437; duplicates elided]
00:32:36.227 [2024-12-05 10:04:23.837254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 
10:04:23.837787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.837956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.837986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.838001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.838014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.838050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.838066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.838082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.838095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.227 [2024-12-05 10:04:23.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.227 [2024-12-05 10:04:23.838124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.228 [2024-12-05 10:04:23.838749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.228 [2024-12-05 10:04:23.838761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:36.228 [2024-12-05 10:04:23.838775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.228 [2024-12-05 10:04:23.838791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 18 further identical WRITE command/completion pairs (lba:74752 through lba:74888, each completion ABORTED - SQ DELETION (00/08)) omitted ...]
00:32:36.229 [2024-12-05 10:04:23.839450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:36.229 [2024-12-05 10:04:23.839469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74896 len:8 PRP1 0x0 PRP2 0x0
00:32:36.229 [2024-12-05 10:04:23.839483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:36.229 [2024-12-05 10:04:23.839578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:36.229 [2024-12-05 10:04:23.839614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeated for admin commands cid:1, cid:2, and cid:3 omitted ...]
00:32:36.229 [2024-12-05 10:04:23.839707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:32:36.229 [2024-12-05 10:04:23.839899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:36.229 [2024-12-05 10:04:23.839924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:36.229 [2024-12-05 10:04:23.839986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0
00:32:36.229 [2024-12-05 10:04:23.840001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... this abort-queued-i/o / manual-completion sequence repeats one queued command at a time for WRITE lba:74912 through lba:75328, READ lba:74312 through lba:74376, and WRITE lba:74384 through lba:74768, every completion ABORTED - SQ DELETION (00/08); only the final entry is shown below ...]
00:32:36.498 [2024-12-05 10:04:23.855616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:36.498 [2024-12-05 10:04:23.855625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:36.498 [2024-12-05 10:04:23.855636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74776 len:8 PRP1 0x0 PRP2 0x0
00:32:36.498 [2024-12-05 10:04:23.855647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:36.498 [2024-12-05 10:04:23.855658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74784 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74792 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74800 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74808 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74816 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74824 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.855927] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.855936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.855963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74832 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.855988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.856005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.856016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.856026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74840 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.856038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.856050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.856059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.856070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74848 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.856082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.856093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 [2024-12-05 10:04:23.856102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.856113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74856 len:8 PRP1 0x0 PRP2 0x0 00:32:36.498 [2024-12-05 10:04:23.856126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.498 [2024-12-05 10:04:23.856138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.498 10:04:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:32:36.498 [2024-12-05 10:04:23.864258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.498 [2024-12-05 10:04:23.864345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74864 len:8 PRP1 0x0 PRP2 0x0 00:32:36.499 [2024-12-05 10:04:23.864362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.499 [2024-12-05 10:04:23.864381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.499 [2024-12-05 10:04:23.864393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.499 [2024-12-05 10:04:23.864404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74872 len:8 PRP1 0x0 PRP2 0x0 00:32:36.499 [2024-12-05 10:04:23.864416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:36.499 [2024-12-05 10:04:23.864427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.499 [2024-12-05 10:04:23.864436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.499 [2024-12-05 10:04:23.864446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74880 len:8 PRP1 0x0 PRP2 0x0 00:32:36.499 [2024-12-05 10:04:23.864472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.499 [2024-12-05 10:04:23.864500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.499 [2024-12-05 10:04:23.864527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.499 [2024-12-05 10:04:23.864537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74888 len:8 PRP1 0x0 PRP2 0x0 00:32:36.499 [2024-12-05 10:04:23.864549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.499 [2024-12-05 10:04:23.864561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.499 [2024-12-05 10:04:23.864570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.499 [2024-12-05 10:04:23.864586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74896 len:8 PRP1 0x0 PRP2 0x0 00:32:36.499 [2024-12-05 10:04:23.864598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.499 [2024-12-05 10:04:23.864938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:36.499 [2024-12-05 10:04:23.865266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:36.499 [2024-12-05 10:04:23.865458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.499 [2024-12-05 10:04:23.865502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:32:36.499 [2024-12-05 10:04:23.865519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:36.499 [2024-12-05 10:04:23.865548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:36.499 [2024-12-05 10:04:23.865582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:32:36.499 [2024-12-05 10:04:23.865598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:32:36.499 [2024-12-05 10:04:23.865612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:36.499 [2024-12-05 10:04:23.865628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
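The condensed burst above is the expected signature of a forced disconnect: every WRITE still queued on the deleted submission queue is completed in software with ABORTED - SQ DELETION (SCT 00h, SC 08h), and only then does bdev_nvme begin its reconnect loop. When triaging a run like this offline, the size of the burst can be pulled from a saved copy of this console output (the file name here is hypothetical, not produced by the harness):

  grep -c 'ABORTED - SQ DELETION' console.log            # total aborted completions
  grep -o 'lba:[0-9]*' console.log | sort -u | wc -l     # distinct LBAs that were in flight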
00:32:36.499 [2024-12-05 10:04:23.865642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:38.388 4644.50 IOPS, 18.14 MiB/s [2024-12-05T10:04:26.017Z]
3096.33 IOPS, 12.10 MiB/s [2024-12-05T10:04:26.017Z]
[2024-12-05 10:04:25.865822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:38.388 [2024-12-05 10:04:25.865883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:32:38.388 [2024-12-05 10:04:25.865905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:32:38.388 [2024-12-05 10:04:25.865974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:32:38.388 [2024-12-05 10:04:25.866009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:32:38.388 [2024-12-05 10:04:25.866025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:32:38.388 [2024-12-05 10:04:25.866041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:38.388 [2024-12-05 10:04:25.866058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:32:38.388 [2024-12-05 10:04:25.866073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:38.388 10:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:32:38.388 10:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:32:38.388 10:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:38.646 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:32:38.646 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:32:38.646 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:32:38.646 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:32:38.903 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:32:38.903 10:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
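The get_controller and get_bdev probes traced above (timeout.sh@41 and @37) boil down to one RPC each, piped through jq; a sketch assembled from the traced commands (the real helper definitions live in host/timeout.sh):

  get_controller() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name'
  }

  get_bdev() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_get_bdevs | jq -r '.[].name'
  }

While resets are still being retried the controller object survives, so the [[ NVMe0 == \N\V\M\e\0 ]] and [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] assertions above pass; once the loss timeout expires, the same probes return empty strings, which is the pair of [[ '' == '' ]] checks further down.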
00:32:40.094 2322.25 IOPS, 9.07 MiB/s [2024-12-05T10:04:27.982Z]
1857.80 IOPS, 7.26 MiB/s [2024-12-05T10:04:27.982Z]
[2024-12-05 10:04:27.866186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:40.353 [2024-12-05 10:04:27.866234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:32:40.353 [2024-12-05 10:04:27.866252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:32:40.353 [2024-12-05 10:04:27.866277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:32:40.353 [2024-12-05 10:04:27.866300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:32:40.353 [2024-12-05 10:04:27.866325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:32:40.353 [2024-12-05 10:04:27.866339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:40.353 [2024-12-05 10:04:27.866361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:32:40.353 [2024-12-05 10:04:27.866376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:42.222 1548.17 IOPS, 6.05 MiB/s [2024-12-05T10:04:30.110Z]
1327.00 IOPS, 5.18 MiB/s [2024-12-05T10:04:30.110Z]
[2024-12-05 10:04:29.866420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:42.481 [2024-12-05 10:04:29.866460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:32:42.481 [2024-12-05 10:04:29.866475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:32:42.481 [2024-12-05 10:04:29.866500] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:32:42.481 [2024-12-05 10:04:29.866515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:32:43.412 1161.12 IOPS, 4.54 MiB/s
00:32:43.412
00:32:43.412 Latency(us)
[2024-12-05T10:04:31.041Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:43.412 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:43.412 Verification LBA range: start 0x0 length 0x4000
00:32:43.412 NVMe0n1             :       8.16    1138.17       4.45     15.68     0.00  110870.24    2442.71 7046430.72
[2024-12-05T10:04:31.041Z] ===================================================================================================================
[2024-12-05T10:04:31.041Z] Total               :               1138.17       4.45     15.68     0.00  110870.24    2442.71 7046430.72
00:32:43.412 {
00:32:43.412   "results": [
00:32:43.412     {
00:32:43.412       "job": "NVMe0n1",
00:32:43.412       "core_mask": "0x4",
00:32:43.412       "workload": "verify",
00:32:43.412       "status": "finished",
00:32:43.412       "verify_range": {
00:32:43.412         "start": 0,
00:32:43.412         "length": 16384
00:32:43.412       },
00:32:43.412       "queue_depth": 128,
00:32:43.412       "io_size": 4096,
00:32:43.412       "runtime": 8.161376,
00:32:43.412       "iops": 1138.1659171198582,
00:32:43.412       "mibps": 4.445960613749446,
00:32:43.412       "io_failed": 128,
00:32:43.412       "io_timeout": 0,
00:32:43.412       "avg_latency_us": 110870.24424918185,
00:32:43.412       "min_latency_us": 2442.7054545454544,
00:32:43.412       "max_latency_us": 7046430.72
00:32:43.412     }
00:32:43.412   ],
00:32:43.412   "core_count": 1
00:32:43.412 }
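The summary table and the JSON blob above are two views of the same counters, so the derived columns can be re-checked by hand: with 4096-byte I/O, MiB/s is IOPS/256, and Fail/s is io_failed over runtime. A standalone awk check (not part of the harness), using the JSON values:

  awk 'BEGIN {
      iops = 1138.1659171198582; io_failed = 128; runtime = 8.161376
      printf "MiB/s  = %.2f\n", iops * 4096 / (1024 * 1024)   # 4.45, matches mibps
      printf "Fail/s = %.2f\n", io_failed / runtime           # 15.68, matches the table
  }'

Note that io_failed equals the queue depth of 128, apparently one failure per command that was in flight when the controller was finally given up on.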
00:32:43.979 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:32:43.979 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:44.238 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:32:44.238 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:32:44.238 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:32:44.238 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 107810
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 107764
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 107764 ']'
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 107764
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:32:44.497 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:44.498 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107764
00:32:44.498 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:32:44.498 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:32:44.498 killing process with pid 107764
00:32:44.498 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107764'
00:32:44.498 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 107764
00:32:44.498 Received shutdown signal, test time was about 9.265434 seconds
00:32:44.498
00:32:44.498 Latency(us)
[2024-12-05T10:04:32.127Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
[2024-12-05T10:04:32.127Z] ===================================================================================================================
[2024-12-05T10:04:32.127Z] Total                       :                  0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:32:45.434 10:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 107764
00:32:45.434 10:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:45.693 [2024-12-05 10:04:33.122927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=107972
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 107972 /var/tmp/bdevperf.sock
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 107972 ']'
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:45.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
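killprocess, traced out of common/autotest_common.sh above, follows a fixed shape: validate the pid argument, confirm the process is alive, resolve its command name, then signal and reap it. A minimal sketch consistent with that trace (the real helper also handles sudo-wrapped processes and non-Linux hosts, omitted here):

  killprocess() {
      [ -z "$1" ] && return 1                            # @954: a pid argument is required
      kill -0 "$1"                                       # @958: fails if the pid is gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")  # @960: resolves to reactor_2 here
      fi
      if [ "$process_name" != sudo ]; then               # @964: sudo wrappers take another path
          echo "killing process with pid $1"             # @972
          kill "$1"                                      # @973
      fi
      wait "$1"                                          # @978: reap and propagate exit status
  }

bdevperf catches the signal, prints the shutdown notice seen above, and exits after flushing its empty final stats table.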
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:45.693 10:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:32:45.953 [2024-12-05 10:04:33.238286] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:32:45.953 [2024-12-05 10:04:33.238474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107972 ]
00:32:45.953 [2024-12-05 10:04:33.408293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:45.953 [2024-12-05 10:04:33.521884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:46.891 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:46.891 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:32:46.891 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:32:46.891 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:32:47.150 NVMe0n1
00:32:47.150 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=108018
00:32:47.150 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:47.150 10:04:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:32:47.409 Running I/O for 10 seconds...
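The three --*-sec flags on the attach call above are the whole point of this test case: --reconnect-delay-sec spaces reconnect attempts 1 s apart, --fast-io-fail-timeout-sec starts failing queued I/O once the path has been down for 2 s, and --ctrlr-loss-timeout-sec gives up on the controller entirely after 5 s. Reflowed for readability (the same command as in the trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 2 \
      --ctrlr-loss-timeout-sec 5

The listener removal that follows is what trips these timers.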
00:32:48.350 10:04:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
8214.00 IOPS, 32.09 MiB/s [2024-12-05T10:04:36.241Z]
[2024-12-05 10:04:35.979954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
[... the identical tcp.c:1790 recv-state error repeats on the target side, timestamps 10:04:35.980053 through 10:04:35.980939, while the listener and its qpair are torn down ...]
[2024-12-05 10:04:35.982012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-05 10:04:35.982079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print-command/ABORTED - SQ DELETION pair repeats for the queued READs lba:71136 through lba:71304 and the queued WRITEs lba:71376 through lba:71704 ...]
[2024-12-05 10:04:35.983895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 10:04:35.983907] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.983920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.614 [2024-12-05 10:04:35.983943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.983959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.614 [2024-12-05 10:04:35.983971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.983984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.614 [2024-12-05 10:04:35.983995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.614 [2024-12-05 10:04:35.984194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.614 [2024-12-05 10:04:35.984224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.614 [2024-12-05 10:04:35.984249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.614 [2024-12-05 10:04:35.984263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 
10:04:35.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.984978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.984991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.615 [2024-12-05 10:04:35.985008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.615 [2024-12-05 10:04:35.985319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:32:48.615 [2024-12-05 10:04:35.985329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.615 [2024-12-05 10:04:35.985339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.615 [2024-12-05 10:04:35.985348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 
0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.985688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.985699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.985708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.985717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.997425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.997482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.997495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.997507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.997518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.997529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.997538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.997548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72136 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.997559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.997570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:48.616 [2024-12-05 10:04:35.997579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:48.616 [2024-12-05 10:04:35.997589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 PRP1 0x0 PRP2 0x0 00:32:48.616 [2024-12-05 10:04:35.997599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.997959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.616 [2024-12-05 10:04:35.997987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.998005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.616 [2024-12-05 10:04:35.998016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.998029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.616 [2024-12-05 10:04:35.998040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.998052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.616 [2024-12-05 10:04:35.998062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.616 [2024-12-05 10:04:35.998074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:48.616 [2024-12-05 10:04:35.998278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:48.616 [2024-12-05 10:04:35.998325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:48.616 [2024-12-05 10:04:35.998505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.616 [2024-12-05 10:04:35.998545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:32:48.616 [2024-12-05 10:04:35.998563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:48.616 [2024-12-05 10:04:35.998590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:48.616 [2024-12-05 10:04:35.998614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:48.616 [2024-12-05 10:04:35.998636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:48.616 [2024-12-05 10:04:35.998663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:48.616 [2024-12-05 10:04:35.998706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:48.616 [2024-12-05 10:04:35.998721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:48.617 10:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:32:49.553 4445.50 IOPS, 17.37 MiB/s [2024-12-05T10:04:37.182Z] [2024-12-05 10:04:36.998832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.553 [2024-12-05 10:04:36.998877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:32:49.553 [2024-12-05 10:04:36.998896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:49.553 [2024-12-05 10:04:36.998921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:49.553 [2024-12-05 10:04:36.998959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:49.553 [2024-12-05 10:04:36.998973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:49.553 [2024-12-05 10:04:36.998986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:49.553 [2024-12-05 10:04:36.998999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:49.553 [2024-12-05 10:04:36.999011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:49.553 10:04:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:49.812 [2024-12-05 10:04:37.249487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:49.812 10:04:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 108018 00:32:50.643 2963.67 IOPS, 11.58 MiB/s [2024-12-05T10:04:38.272Z] [2024-12-05 10:04:38.013297] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
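The errno = 111 (ECONNREFUSED) failures above are expected here: host/timeout.sh flaps the target's TCP listener while bdevperf keeps I/O queued, so every host reconnect attempt fails until the listener is restored and the reset finally succeeds. The flap itself reduces to two RPC calls; a minimal sketch, assuming an SPDK nvmf target running locally and the subsystem/address shown in this log:

# take the listener down: in-flight I/O is aborted (SQ DELETION) and host reconnects fail with errno 111
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 1
# bring it back: the host's next reconnect attempt succeeds ("Resetting controller successful")
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420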
00:32:52.516 2222.75 IOPS, 8.68 MiB/s [2024-12-05T10:04:41.081Z] 3245.60 IOPS, 12.68 MiB/s [2024-12-05T10:04:42.018Z] 4043.33 IOPS, 15.79 MiB/s [2024-12-05T10:04:42.955Z] 4629.14 IOPS, 18.08 MiB/s [2024-12-05T10:04:44.333Z] 5067.62 IOPS, 19.80 MiB/s [2024-12-05T10:04:45.269Z] 5437.11 IOPS, 21.24 MiB/s [2024-12-05T10:04:45.269Z] 5720.70 IOPS, 22.35 MiB/s 00:32:57.640 Latency(us) 00:32:57.640 [2024-12-05T10:04:45.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.640 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:57.640 Verification LBA range: start 0x0 length 0x4000 00:32:57.640 NVMe0n1 : 10.01 5729.01 22.38 0.00 0.00 22312.21 2278.87 3035150.89 00:32:57.640 [2024-12-05T10:04:45.269Z] =================================================================================================================== 00:32:57.640 [2024-12-05T10:04:45.269Z] Total : 5729.01 22.38 0.00 0.00 22312.21 2278.87 3035150.89 00:32:57.640 { 00:32:57.640 "results": [ 00:32:57.640 { 00:32:57.640 "job": "NVMe0n1", 00:32:57.640 "core_mask": "0x4", 00:32:57.640 "workload": "verify", 00:32:57.640 "status": "finished", 00:32:57.640 "verify_range": { 00:32:57.640 "start": 0, 00:32:57.640 "length": 16384 00:32:57.640 }, 00:32:57.640 "queue_depth": 128, 00:32:57.640 "io_size": 4096, 00:32:57.640 "runtime": 10.00783, 00:32:57.640 "iops": 5729.014181895575, 00:32:57.640 "mibps": 22.37896164802959, 00:32:57.640 "io_failed": 0, 00:32:57.640 "io_timeout": 0, 00:32:57.640 "avg_latency_us": 22312.20639146325, 00:32:57.640 "min_latency_us": 2278.8654545454547, 00:32:57.640 "max_latency_us": 3035150.8945454545 00:32:57.640 } 00:32:57.640 ], 00:32:57.640 "core_count": 1 00:32:57.640 } 00:32:57.640 10:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=108131 00:32:57.640 10:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:57.640 10:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:32:57.640 Running I/O for 10 seconds... 
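The human-readable Latency(us) table and the JSON block above report the same run: the mibps field follows directly from iops and io_size (4096 bytes). A quick consistency check in shell, plugging in the values from the JSON:

# 5729.01 IOPS x 4096 B per I/O = 22.38 MiB/s, matching "mibps" and the table's MiB/s column
awk 'BEGIN { printf "%.2f MiB/s\n", 5729.014181895575 * 4096 / 1048576 }'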
00:32:58.577 10:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:32:58.577 8416.00 IOPS, 32.88 MiB/s [2024-12-05T10:04:46.206Z]
00:32:58.577 [2024-12-05 10:04:46.193011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
[... the identical recv-state error repeats for tqpair=0x618000004480, timestamps 10:04:46.193011 through 10:04:46.194512 ...]
00:32:58.579 [2024-12-05 10:04:46.195472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.579 [2024-12-05 10:04:46.195548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/completion pair repeats for lba 76528 through 76584, each aborted with SQ DELETION (00/08) ...]
00:32:58.579 [2024-12-05 10:04:46.195793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:74 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.195988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.195999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76672 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:58.579 [2024-12-05 10:04:46.196351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.579 [2024-12-05 10:04:46.196436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.579 [2024-12-05 10:04:46.196446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 
10:04:46.196589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.580 [2024-12-05 10:04:46.196614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.196986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.196999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:58.580 [2024-12-05 10:04:46.197357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.580 [2024-12-05 10:04:46.197368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197890] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.197970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.197988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77320 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.581 [2024-12-05 10:04:46.198343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.581 [2024-12-05 10:04:46.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:32:58.581 [2024-12-05 10:04:46.198457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.581 [2024-12-05 10:04:46.198492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.581 [2024-12-05 10:04:46.198506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:32:58.581 [2024-12-05 10:04:46.198533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.581 [2024-12-05 10:04:46.198544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.581 [2024-12-05 10:04:46.198554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 
[2024-12-05 10:04:46.198933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.198943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.198956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.198967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.198978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77456 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77464 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77472 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77480 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199201] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77488 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.582 [2024-12-05 10:04:46.199253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.582 [2024-12-05 10:04:46.199262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.582 [2024-12-05 10:04:46.199272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:32:58.582 [2024-12-05 10:04:46.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.210506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.842 [2024-12-05 10:04:46.210589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.842 [2024-12-05 10:04:46.210630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:32:58.842 [2024-12-05 10:04:46.210679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.210693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.842 [2024-12-05 10:04:46.210702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.842 [2024-12-05 10:04:46.210713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:32:58.842 [2024-12-05 10:04:46.210723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.210734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.842 [2024-12-05 10:04:46.210743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.842 [2024-12-05 10:04:46.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:32:58.842 [2024-12-05 10:04:46.210763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.210774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.842 [2024-12-05 10:04:46.210792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.842 [2024-12-05 10:04:46.210801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:32:58.842 [2024-12-05 10:04:46.210818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.210835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:58.842 [2024-12-05 10:04:46.210844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.842 [2024-12-05 10:04:46.210853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:32:58.842 [2024-12-05 10:04:46.210865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.211167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.842 [2024-12-05 10:04:46.211198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.211215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.842 [2024-12-05 10:04:46.211227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.211239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.842 [2024-12-05 10:04:46.211250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.211262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.842 [2024-12-05 10:04:46.211273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.842 [2024-12-05 10:04:46.211284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:58.842 [2024-12-05 10:04:46.211478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:58.842 [2024-12-05 10:04:46.211511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:58.842 [2024-12-05 10:04:46.211615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.842 [2024-12-05 10:04:46.211651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:32:58.842 [2024-12-05 10:04:46.211667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:58.842 [2024-12-05 10:04:46.211692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:58.842 [2024-12-05 10:04:46.211714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:32:58.842 [2024-12-05 10:04:46.211727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:32:58.842 [2024-12-05 10:04:46.211742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:32:58.842 [2024-12-05 10:04:46.211756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:32:58.842 [2024-12-05 10:04:46.211769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:58.842 10:04:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:32:59.778 4782.50 IOPS, 18.68 MiB/s [2024-12-05T10:04:47.407Z] [2024-12-05 10:04:47.211873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.778 [2024-12-05 10:04:47.211921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:32:59.778 [2024-12-05 10:04:47.211952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:32:59.778 [2024-12-05 10:04:47.211978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:32:59.778 [2024-12-05 10:04:47.212000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:32:59.778 [2024-12-05 10:04:47.212014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:32:59.778 [2024-12-05 10:04:47.212042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:59.778 [2024-12-05 10:04:47.212056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:32:59.778 [2024-12-05 10:04:47.212075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:00.714 3188.33 IOPS, 12.45 MiB/s [2024-12-05T10:04:48.343Z] [2024-12-05 10:04:48.212173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.714 [2024-12-05 10:04:48.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:33:00.714 [2024-12-05 10:04:48.212241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:33:00.714 [2024-12-05 10:04:48.212265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:33:00.714 [2024-12-05 10:04:48.212287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:33:00.714 [2024-12-05 10:04:48.212299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:33:00.714 [2024-12-05 10:04:48.212311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:00.714 [2024-12-05 10:04:48.212323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:33:00.714 [2024-12-05 10:04:48.212335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:01.650 2391.25 IOPS, 9.34 MiB/s [2024-12-05T10:04:49.279Z] [2024-12-05 10:04:49.215047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.650 [2024-12-05 10:04:49.215103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:33:01.650 [2024-12-05 10:04:49.215128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:33:01.650 [2024-12-05 10:04:49.215336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:33:01.650 [2024-12-05 10:04:49.215546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:33:01.650 [2024-12-05 10:04:49.215569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:33:01.650 [2024-12-05 10:04:49.215582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:01.650 [2024-12-05 10:04:49.215594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:33:01.650 [2024-12-05 10:04:49.215606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:01.650 10:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:01.908 [2024-12-05 10:04:49.437557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:01.908 10:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 108131 00:33:02.735 1913.00 IOPS, 7.47 MiB/s [2024-12-05T10:04:50.364Z] [2024-12-05 10:04:50.247660] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:33:04.686 2700.50 IOPS, 10.55 MiB/s [2024-12-05T10:04:53.249Z] 3566.71 IOPS, 13.93 MiB/s [2024-12-05T10:04:54.185Z] 4202.88 IOPS, 16.42 MiB/s [2024-12-05T10:04:55.121Z] 4696.89 IOPS, 18.35 MiB/s [2024-12-05T10:04:55.121Z] 5102.40 IOPS, 19.93 MiB/s 00:33:07.492 Latency(us) 00:33:07.492 [2024-12-05T10:04:55.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.492 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:07.492 Verification LBA range: start 0x0 length 0x4000 00:33:07.492 NVMe0n1 : 10.01 5110.72 19.96 4466.46 0.00 13340.42 2293.76 3035150.89 00:33:07.492 [2024-12-05T10:04:55.121Z] =================================================================================================================== 00:33:07.492 [2024-12-05T10:04:55.121Z] Total : 5110.72 19.96 4466.46 0.00 13340.42 0.00 3035150.89 00:33:07.492 { 00:33:07.492 "results": [ 00:33:07.492 { 00:33:07.492 "job": "NVMe0n1", 00:33:07.492 "core_mask": "0x4", 00:33:07.492 "workload": "verify", 00:33:07.492 "status": "finished", 00:33:07.492 "verify_range": { 00:33:07.492 "start": 0, 00:33:07.492 "length": 16384 00:33:07.492 }, 00:33:07.492 "queue_depth": 128, 00:33:07.492 "io_size": 4096, 00:33:07.492 "runtime": 10.008371, 00:33:07.492 "iops": 5110.721814768857, 00:33:07.492 "mibps": 19.96375708894085, 00:33:07.492 "io_failed": 44702, 00:33:07.492 "io_timeout": 0, 00:33:07.492 "avg_latency_us": 13340.417220791143, 00:33:07.492 "min_latency_us": 2293.76, 00:33:07.492 "max_latency_us": 3035150.8945454545 00:33:07.492 } 00:33:07.492 ], 00:33:07.492 "core_count": 1 00:33:07.492 } 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 107972 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 107972 ']' 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 107972 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.492 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107972 00:33:07.750 killing process with pid 107972 00:33:07.750 Received shutdown signal, test time was about 10.000000 seconds 00:33:07.750 00:33:07.750 Latency(us) 00:33:07.750 [2024-12-05T10:04:55.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.750 [2024-12-05T10:04:55.379Z] =================================================================================================================== 00:33:07.750 [2024-12-05T10:04:55.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.750 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:07.750 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:07.750 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107972' 00:33:07.750 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 107972 00:33:07.750 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 107972 00:33:08.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=108259 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 108259 /var/tmp/bdevperf.sock 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 108259 ']' 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.684 10:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:08.684 [2024-12-05 10:04:56.112538] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:33:08.684 [2024-12-05 10:04:56.112742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108259 ] 00:33:08.684 [2024-12-05 10:04:56.290990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.942 [2024-12-05 10:04:56.402836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.509 10:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.509 10:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:33:09.509 10:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 108259 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:33:09.509 10:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=108287 00:33:09.509 10:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:33:09.768 10:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:33:10.025 NVMe0n1 00:33:10.025 10:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=108335 00:33:10.025 10:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:10.025 10:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:33:10.025 Running I/O for 10 seconds... 
00:33:10.960 10:04:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:11.222 15839.00 IOPS, 61.87 MiB/s [2024-12-05T10:04:58.851Z] [2024-12-05 10:04:58.804084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:33:11.222 [2024-12-05 10:04:58.804504]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:33:11.224 [2024-12-05 10:04:58.806016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:33:11.224 [2024-12-05 10:04:58.806946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807234] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:11.224 [2024-12-05 10:04:58.807721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.224 [2024-12-05 10:04:58.807734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.807963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.807980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 
[2024-12-05 10:04:58.807991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.225 [2024-12-05 10:04:58.808477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.225 [2024-12-05 10:04:58.808488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:11.225 [2024-12-05 10:04:58.808500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:11.225 [2024-12-05 10:04:58.808511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs (10:04:58.808523 through 10:04:58.810122) omitted: every remaining queued READ on qid:1 is completed as ABORTED - SQ DELETION (00/08) while the I/O submission queue is torn down for the controller reset ...]
00:33:11.227 [2024-12-05 10:04:58.810134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set
00:33:11.227 [2024-12-05 10:04:58.810149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:11.227 [2024-12-05 10:04:58.810160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:11.227 [2024-12-05 10:04:58.810177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120336 len:8 PRP1 0x0 PRP2 0x0
00:33:11.227 [2024-12-05 10:04:58.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:11.227 [2024-12-05 10:04:58.810680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:11.227 [2024-12-05 10:04:58.810820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:11.227 [2024-12-05 10:04:58.810977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
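Two codes in the records above are worth decoding. The completion status (00/08) is NVMe status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": every READ still queued on qid:1 is failed back when the submission queue is destroyed for the reset. The socket error errno = 111 is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.3:4420, which is exactly the outage this timeout test provokes. The size of the abort burst can be gauged from a saved copy of this output with the same kind of grep the test itself uses later (the log file name here is hypothetical):

    # count READs failed back with ABORTED - SQ DELETION during the reset
    grep -c 'ABORTED - SQ DELETION' nvmf_timeout_run.log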
00:33:11.227 [2024-12-05 10:04:58.811014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:33:11.227 [2024-12-05 10:04:58.811029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:33:11.227 [2024-12-05 10:04:58.811053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:11.227 [2024-12-05 10:04:58.811076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:33:11.227 [2024-12-05 10:04:58.811089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:33:11.227 [2024-12-05 10:04:58.811102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:11.227 [2024-12-05 10:04:58.811123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:33:11.227 [2024-12-05 10:04:58.811136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:11.227 10:04:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 108335
00:33:13.095 9219.00 IOPS, 36.01 MiB/s
[2024-12-05T10:05:00.981Z] 6146.00 IOPS, 24.01 MiB/s
[2024-12-05T10:05:00.981Z] [2024-12-05 10:05:00.811312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.352 [2024-12-05 10:05:00.811392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:33:13.352 [2024-12-05 10:05:00.811413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:33:13.352 [2024-12-05 10:05:00.811459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:13.352 [2024-12-05 10:05:00.811488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:33:13.352 [2024-12-05 10:05:00.811502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:33:13.352 [2024-12-05 10:05:00.811518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:13.353 [2024-12-05 10:05:00.811534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
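The timestamps expose the retry cadence: connect attempts fire at 10:04:58, 10:05:00, 10:05:02 and 10:05:04, a fixed 2-second reconnect delay, which is what the "reconnect delay bdev controller NVMe0" probe hits counted further down add up to. That cadence is set when the controller is attached. A minimal sketch, assuming the reconnect options exposed by recent scripts/rpc.py builds (the flag names and values here are illustrative, not read from this run's scripts):

    # attach NVMe0 so the bdev layer retries every 2 s and never gives the controller up
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec -1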
00:33:13.353 [2024-12-05 10:05:00.811549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:15.219 4609.50 IOPS, 18.01 MiB/s
[2024-12-05T10:05:02.848Z] 3687.60 IOPS, 14.40 MiB/s
[2024-12-05T10:05:02.848Z] [2024-12-05 10:05:02.811665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.219 [2024-12-05 10:05:02.811716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:33:15.219 [2024-12-05 10:05:02.811733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:33:15.219 [2024-12-05 10:05:02.811756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:15.219 [2024-12-05 10:05:02.811778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:33:15.219 [2024-12-05 10:05:02.811790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:33:15.219 [2024-12-05 10:05:02.811803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:15.219 [2024-12-05 10:05:02.811815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:33:15.219 [2024-12-05 10:05:02.811827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:33:17.102 3073.00 IOPS, 12.00 MiB/s
[2024-12-05T10:05:04.990Z] 2634.00 IOPS, 10.29 MiB/s
[2024-12-05T10:05:04.990Z] [2024-12-05 10:05:04.811875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:33:17.361 [2024-12-05 10:05:04.811920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:33:17.361 [2024-12-05 10:05:04.811959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:33:17.361 [2024-12-05 10:05:04.811972] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:33:17.361 [2024-12-05 10:05:04.811985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
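The shrinking IOPS figures are bdevperf's running average over the whole run, not an instantaneous rate: roughly 18438 I/Os completed before the path died around the 2-second mark, and each later sample is that fixed total divided by the elapsed seconds (18438/2 = 9219.00, /3 = 6146.00, /4 = 4609.50, ..., /8 = 2304.75). The final summary below, 2255.27 IOPS over 8.18 s of runtime, is the same total spread across the full run. A one-liner reproduces the printed series (a sketch; 18438 is inferred from the samples rather than printed anywhere in the log):

    # cumulative ~18438 I/Os averaged over 2..8 elapsed seconds
    for t in 2 3 4 5 6 7 8; do echo "scale=2; 18438 / $t" | bc; done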
00:33:18.304 2304.75 IOPS, 9.00 MiB/s 00:33:18.304 Latency(us) 00:33:18.304 [2024-12-05T10:05:05.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.304 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:33:18.304 NVMe0n1 : 8.18 2255.27 8.81 15.66 0.00 56314.85 2442.71 7046430.72 00:33:18.304 [2024-12-05T10:05:05.933Z] =================================================================================================================== 00:33:18.304 [2024-12-05T10:05:05.933Z] Total : 2255.27 8.81 15.66 0.00 56314.85 2442.71 7046430.72 00:33:18.304 { 00:33:18.304 "results": [ 00:33:18.304 { 00:33:18.304 "job": "NVMe0n1", 00:33:18.304 "core_mask": "0x4", 00:33:18.304 "workload": "randread", 00:33:18.304 "status": "finished", 00:33:18.304 "queue_depth": 128, 00:33:18.304 "io_size": 4096, 00:33:18.304 "runtime": 8.175534, 00:33:18.304 "iops": 2255.2655276095725, 00:33:18.304 "mibps": 8.809630967224892, 00:33:18.304 "io_failed": 128, 00:33:18.304 "io_timeout": 0, 00:33:18.304 "avg_latency_us": 56314.84515233123, 00:33:18.304 "min_latency_us": 2442.7054545454544, 00:33:18.304 "max_latency_us": 7046430.72 00:33:18.304 } 00:33:18.304 ], 00:33:18.304 "core_count": 1 00:33:18.304 } 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:18.304 Attaching 5 probes... 00:33:18.304 1340.993516: reset bdev controller NVMe0 00:33:18.304 1341.210355: reconnect bdev controller NVMe0 00:33:18.304 3341.500085: reconnect delay bdev controller NVMe0 00:33:18.304 3341.521811: reconnect bdev controller NVMe0 00:33:18.304 5341.918988: reconnect delay bdev controller NVMe0 00:33:18.304 5341.933122: reconnect bdev controller NVMe0 00:33:18.304 7342.179860: reconnect delay bdev controller NVMe0 00:33:18.304 7342.194516: reconnect bdev controller NVMe0 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 108287 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 108259 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 108259 ']' 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 108259 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.304 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108259 00:33:18.304 killing process with pid 108259 00:33:18.304 Received shutdown signal, test time was about 8.234125 seconds 00:33:18.304 00:33:18.304 Latency(us) 00:33:18.304 [2024-12-05T10:05:05.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.304 [2024-12-05T10:05:05.933Z] =================================================================================================================== 00:33:18.304 [2024-12-05T10:05:05.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.305 10:05:05 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:18.305 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:18.305 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108259' 00:33:18.305 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 108259 00:33:18.305 10:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 108259 00:33:19.238 10:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.802 rmmod nvme_tcp 00:33:19.802 rmmod nvme_fabrics 00:33:19.802 rmmod nvme_keyring 00:33:19.802 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 107672 ']' 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 107672 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 107672 ']' 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 107672 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107672 00:33:19.803 killing process with pid 107672 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107672' 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 107672 00:33:19.803 10:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 107672 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.181 
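The teardown above unloaded the nvme-tcp, nvme-fabrics and nvme-keyring modules and killed the target process (pid 107672); the iptr step traced below restores the firewall while stripping the SPDK_NVMF-tagged rules the test added, before the veth/netns plumbing is deleted. From the consecutive iptables-save, grep -v SPDK_NVMF and iptables-restore entries in the trace, iptr amounts to this pipeline (a reconstruction from the trace, not the helper's literal source):

    # drop SPDK_NVMF-commented rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore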
10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:33:21.181 00:33:21.181 real 0m50.611s 00:33:21.181 user 2m26.379s 00:33:21.181 sys 0m5.494s 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:21.181 ************************************ 00:33:21.181 END TEST nvmf_timeout 00:33:21.181 ************************************ 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:21.181 00:33:21.181 real 7m8.759s 00:33:21.181 user 19m21.145s 00:33:21.181 sys 1m17.188s 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.181 10:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:21.182 ************************************ 00:33:21.182 END TEST nvmf_host 00:33:21.182 ************************************ 00:33:21.182 10:05:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:21.182 10:05:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:21.182 10:05:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:21.182 10:05:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:21.182 10:05:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:21.182 10:05:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.182 ************************************ 00:33:21.182 START TEST nvmf_target_core_interrupt_mode 00:33:21.182 ************************************ 00:33:21.182 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:21.442 * Looking for test storage... 00:33:21.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.442 --rc genhtml_branch_coverage=1 00:33:21.442 --rc genhtml_function_coverage=1 00:33:21.442 --rc genhtml_legend=1 00:33:21.442 --rc geninfo_all_blocks=1 00:33:21.442 --rc geninfo_unexecuted_blocks=1 00:33:21.442 00:33:21.442 ' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.442 --rc genhtml_branch_coverage=1 00:33:21.442 --rc genhtml_function_coverage=1 00:33:21.442 --rc genhtml_legend=1 00:33:21.442 --rc geninfo_all_blocks=1 00:33:21.442 --rc geninfo_unexecuted_blocks=1 00:33:21.442 00:33:21.442 ' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.442 --rc genhtml_branch_coverage=1 00:33:21.442 --rc genhtml_function_coverage=1 00:33:21.442 --rc genhtml_legend=1 00:33:21.442 --rc geninfo_all_blocks=1 00:33:21.442 --rc geninfo_unexecuted_blocks=1 00:33:21.442 00:33:21.442 ' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:21.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.442 --rc genhtml_branch_coverage=1 00:33:21.442 --rc genhtml_function_coverage=1 00:33:21.442 --rc genhtml_legend=1 00:33:21.442 --rc geninfo_all_blocks=1 00:33:21.442 --rc geninfo_unexecuted_blocks=1 00:33:21.442 00:33:21.442 ' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:21.442 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:21.443 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:21.443 ************************************ 00:33:21.443 START TEST nvmf_abort 00:33:21.443 ************************************ 00:33:21.443 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:21.704 * Looking for test storage... 00:33:21.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.704 --rc genhtml_branch_coverage=1 00:33:21.704 --rc genhtml_function_coverage=1 00:33:21.704 --rc genhtml_legend=1 00:33:21.704 --rc geninfo_all_blocks=1 00:33:21.704 --rc geninfo_unexecuted_blocks=1 00:33:21.704 00:33:21.704 ' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.704 --rc genhtml_branch_coverage=1 00:33:21.704 --rc genhtml_function_coverage=1 00:33:21.704 --rc genhtml_legend=1 00:33:21.704 --rc geninfo_all_blocks=1 00:33:21.704 --rc geninfo_unexecuted_blocks=1 00:33:21.704 00:33:21.704 ' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.704 --rc genhtml_branch_coverage=1 00:33:21.704 --rc genhtml_function_coverage=1 00:33:21.704 --rc genhtml_legend=1 00:33:21.704 --rc geninfo_all_blocks=1 00:33:21.704 --rc geninfo_unexecuted_blocks=1 00:33:21.704 00:33:21.704 ' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.704 --rc genhtml_branch_coverage=1 00:33:21.704 --rc genhtml_function_coverage=1 00:33:21.704 --rc genhtml_legend=1 00:33:21.704 --rc geninfo_all_blocks=1 00:33:21.704 --rc geninfo_unexecuted_blocks=1 00:33:21.704 00:33:21.704 ' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:21.704 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.705 10:05:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:21.705 Cannot find device "nvmf_init_br" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:21.705 Cannot find device "nvmf_init_br2" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:21.705 Cannot find device "nvmf_tgt_br" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:21.705 Cannot find device "nvmf_tgt_br2" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:21.705 Cannot find device "nvmf_init_br" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:21.705 Cannot find device "nvmf_init_br2" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:21.705 Cannot find device "nvmf_tgt_br" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:21.705 Cannot find device "nvmf_tgt_br2" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:21.705 Cannot find device "nvmf_br" 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:33:21.705 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:21.966 Cannot find device "nvmf_init_if" 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:21.966 Cannot find device "nvmf_init_if2" 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:21.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:21.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:21.966 
10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:21.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
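# Sketch of the ipts/iptr helpers exercised above, reconstructed from the expanded
# iptables commands in this trace (illustrative, not the verbatim nvmf/common.sh
# source): every rule the test inserts carries an SPDK_NVMF comment tag, so teardown
# can later strip exactly the rules this test added and nothing else.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP to the initiator link
iptr  # later, during nvmf_tcp_fini: removes only the SPDK_NVMF-tagged rules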
00:33:21.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:33:21.966 00:33:21.966 --- 10.0.0.3 ping statistics --- 00:33:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.966 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:21.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:21.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:33:21.966 00:33:21.966 --- 10.0.0.4 ping statistics --- 00:33:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.966 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:21.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:33:21.966 00:33:21.966 --- 10.0.0.1 ping statistics --- 00:33:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.966 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:21.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:33:21.966 00:33:21.966 --- 10.0.0.2 ping statistics --- 00:33:21.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.966 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.966 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=108770 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 108770 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 108770 ']' 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.225 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:22.225 [2024-12-05 10:05:09.709202] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:22.225 [2024-12-05 10:05:09.711567] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:33:22.225 [2024-12-05 10:05:09.711675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.483 [2024-12-05 10:05:09.888701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:22.483 [2024-12-05 10:05:10.063171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.483 [2024-12-05 10:05:10.063261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.483 [2024-12-05 10:05:10.063284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.483 [2024-12-05 10:05:10.063305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.484 [2024-12-05 10:05:10.063320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.484 [2024-12-05 10:05:10.066265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.484 [2024-12-05 10:05:10.066444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.484 [2024-12-05 10:05:10.066445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.051 [2024-12-05 10:05:10.429219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:23.051 [2024-12-05 10:05:10.430348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.051 [2024-12-05 10:05:10.430551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:23.051 [2024-12-05 10:05:10.433630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
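# Condensed sketch of the nvmfappstart step just traced: start the target inside the
# test namespace in interrupt mode, then block until its RPC socket answers. The poll
# loop is illustrative (the harness's waitforlisten does the equivalent); the binary
# path, core mask and flags are the ones shown above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done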
00:33:23.051 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.051 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:23.051 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.051 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.051 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 [2024-12-05 10:05:10.696534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 Malloc0 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 Delay0 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 10:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 [2024-12-05 10:05:10.840619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.310 10:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:23.570 [2024-12-05 10:05:11.108064] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:26.106 Initializing NVMe Controllers 00:33:26.106 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:33:26.106 controller IO queue size 128 less than required 00:33:26.106 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:26.106 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:26.106 Initialization complete. Launching workers. 
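# The subsystem the abort run below targets was provisioned by the rpc_cmd calls traced
# above; rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock.
# Equivalent manual sequence, with arguments copied from the trace (the 1000000 values
# are microseconds, i.e. ~1 s of injected latency so aborts have in-flight I/O to hit):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256             # TCP transport
$rpc bdev_malloc_create 64 4096 -b Malloc0                      # 64 MiB RAM disk, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0    # expose Delay0 as NSID 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420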
00:33:26.106 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29388 00:33:26.106 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29445, failed to submit 66 00:33:26.106 success 29388, unsuccessful 57, failed 0 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.106 rmmod nvme_tcp 00:33:26.106 rmmod nvme_fabrics 00:33:26.106 rmmod nvme_keyring 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 108770 ']' 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 108770 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 108770 ']' 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 108770 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108770 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:26.106 killing process with pid 108770 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108770' 00:33:26.106 
10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 108770 00:33:26.106 10:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 108770 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.045 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.045 10:05:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:33:27.304 00:33:27.304 real 0m5.695s 00:33:27.304 user 0m11.157s 00:33:27.304 sys 0m1.624s 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.304 ************************************ 00:33:27.304 END TEST nvmf_abort 00:33:27.304 ************************************ 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:27.304 ************************************ 00:33:27.304 START TEST nvmf_ns_hotplug_stress 00:33:27.304 ************************************ 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:27.304 * Looking for test storage... 00:33:27.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.304 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.564 10:05:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:27.564 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.565 --rc genhtml_branch_coverage=1 00:33:27.565 --rc genhtml_function_coverage=1 00:33:27.565 --rc genhtml_legend=1 00:33:27.565 --rc geninfo_all_blocks=1 00:33:27.565 --rc geninfo_unexecuted_blocks=1 00:33:27.565 00:33:27.565 ' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.565 --rc genhtml_branch_coverage=1 00:33:27.565 --rc genhtml_function_coverage=1 00:33:27.565 --rc genhtml_legend=1 00:33:27.565 --rc geninfo_all_blocks=1 00:33:27.565 --rc geninfo_unexecuted_blocks=1 00:33:27.565 00:33:27.565 
' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.565 --rc genhtml_branch_coverage=1 00:33:27.565 --rc genhtml_function_coverage=1 00:33:27.565 --rc genhtml_legend=1 00:33:27.565 --rc geninfo_all_blocks=1 00:33:27.565 --rc geninfo_unexecuted_blocks=1 00:33:27.565 00:33:27.565 ' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.565 --rc genhtml_branch_coverage=1 00:33:27.565 --rc genhtml_function_coverage=1 00:33:27.565 --rc genhtml_legend=1 00:33:27.565 --rc geninfo_all_blocks=1 00:33:27.565 --rc geninfo_unexecuted_blocks=1 00:33:27.565 00:33:27.565 ' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.565 10:05:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.565 10:05:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:27.565 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:27.566 Cannot find device "nvmf_init_br" 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:33:27.566 Cannot find device "nvmf_init_br2" 00:33:27.566 10:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:27.566 Cannot find device "nvmf_tgt_br" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:27.566 Cannot find device "nvmf_tgt_br2" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:27.566 Cannot find device "nvmf_init_br" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:27.566 Cannot find device "nvmf_init_br2" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:27.566 Cannot find device "nvmf_tgt_br" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:27.566 Cannot find device "nvmf_tgt_br2" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:27.566 Cannot find device "nvmf_br" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:27.566 Cannot find device "nvmf_init_if" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:27.566 Cannot find device "nvmf_init_if2" 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:27.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:27.566 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:27.566 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:27.824 10:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:27.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:27.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:33:27.824 00:33:27.824 --- 10.0.0.3 ping statistics --- 00:33:27.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.824 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:27.824 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:27.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:27.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:33:27.824 00:33:27.824 --- 10.0.0.4 ping statistics --- 00:33:27.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.825 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:27.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:33:27.825 00:33:27.825 --- 10.0.0.1 ping statistics --- 00:33:27.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.825 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:27.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:33:27.825 00:33:27.825 --- 10.0.0.2 ping statistics --- 00:33:27.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.825 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=109096 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 109096 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 109096 ']' 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.825 10:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.825 10:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:28.081 [2024-12-05 10:05:15.570655] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.081 [2024-12-05 10:05:15.573868] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:33:28.081 [2024-12-05 10:05:15.574017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.339 [2024-12-05 10:05:15.769784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:28.339 [2024-12-05 10:05:15.929607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.339 [2024-12-05 10:05:15.929687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.339 [2024-12-05 10:05:15.929703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.339 [2024-12-05 10:05:15.929717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.339 [2024-12-05 10:05:15.929727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.339 [2024-12-05 10:05:15.931838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.339 [2024-12-05 10:05:15.932004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.339 [2024-12-05 10:05:15.932018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.906 [2024-12-05 10:05:16.253112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:28.906 [2024-12-05 10:05:16.254183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:28.906 [2024-12-05 10:05:16.254274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:28.906 [2024-12-05 10:05:16.254557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
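Condensed, the nvmf_veth_init trace above amounts to the standalone sketch below (assuming root plus iproute2 and iptables; every interface name, address and rule is the one in the trace): two initiator-side veth pairs, two target-side veth pairs whose usable ends move into the nvmf_tgt_ns_spdk namespace, one bridge joining the four peer ends, and ACCEPT rules for NVMe/TCP port 4420.

    #!/usr/bin/env bash
    # Rebuild the test topology that nvmf_veth_init sets up (sketch, run as root).
    set -euo pipefail
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # veth pairs: the *_if end carries an address, the *_br end joins the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side ends live in the namespace the nvmf_tgt process will run in
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    # one bridge ties the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # admit NVMe/TCP (port 4420) and let traffic hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # same connectivity checks as the trace: initiator -> target, then the reverse
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec "$NS" ping -c 1 10.0.0.1 && ip netns exec "$NS" ping -c 1 10.0.0.2

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, per the common.sh@508 line above), which is why the subsystem listener later binds the in-namespace address 10.0.0.3.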
00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:33:29.165 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:29.423 [2024-12-05 10:05:16.801456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.423 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:29.682 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:29.940 [2024-12-05 10:05:17.322204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:29.941 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:29.941 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:30.199 Malloc0 00:33:30.199 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:30.458 Delay0 00:33:30.716 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.716 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:31.284 NULL1 00:33:31.284 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:33:31.284 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:31.284 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=109227 00:33:31.284 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:31.284 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.543 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.801 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:31.801 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:32.060 true 00:33:32.060 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:32.060 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.319 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:32.578 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:32.578 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:32.850 true 00:33:32.850 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:32.850 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.138 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.402 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:33.402 10:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:33.661 true 00:33:33.661 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:33.661 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.661 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.920 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:33.920 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:34.179 true 00:33:34.179 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:34.179 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.439 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.698 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:34.698 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:34.958 true 00:33:34.958 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:34.958 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.217 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:35.476 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:35.476 10:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:35.735 true 00:33:35.735 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:35.735 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.994 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.252 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:36.252 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:36.252 true 00:33:36.252 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:36.252 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.818 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.818 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:36.818 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:37.076 true 00:33:37.076 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:37.077 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.336 10:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:37.595 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:37.595 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:37.854 true 00:33:37.854 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:37.854 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.113 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.371 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:38.371 10:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:38.630 true 00:33:38.630 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:38.630 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.889 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.889 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:38.889 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:39.149 true 00:33:39.407 10:05:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:39.407 10:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.407 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:39.666 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:39.666 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:39.925 true 00:33:39.925 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:39.925 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.184 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.443 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:40.443 10:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:40.702 true 00:33:40.702 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:40.702 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.961 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.221 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:41.221 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:41.479 true 00:33:41.479 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:41.480 10:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.739 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.998 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:41.998 
10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:42.257 true 00:33:42.257 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:42.257 10:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.516 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.774 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:42.775 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:43.033 true 00:33:43.033 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:43.033 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:43.292 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:43.578 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:43.578 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:43.578 true 00:33:43.835 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:43.836 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.093 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.350 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:44.350 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:44.350 true 00:33:44.607 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:44.607 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.607 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.863 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:44.863 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:45.120 true 00:33:45.120 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:45.121 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.378 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.635 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:45.635 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:45.893 true 00:33:45.893 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:45.893 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:46.151 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:46.409 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:46.409 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:46.666 true 00:33:46.667 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:46.667 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:46.925 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:47.183 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:47.183 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:47.442 true 00:33:47.442 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:47.442 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:47.699 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:47.957 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:47.957 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:48.214 true 00:33:48.214 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:48.214 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:48.471 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:48.728 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:48.728 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:48.986 true 00:33:48.986 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:48.986 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.244 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:49.502 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:49.502 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:49.759 true 00:33:49.759 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:49.759 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.020 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:50.277 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:50.277 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:50.277 true 00:33:50.534 10:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:50.534 10:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.792 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:51.049 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:51.049 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:51.306 true 00:33:51.306 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:51.307 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.564 10:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:51.822 10:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:33:51.822 10:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:52.081 true 00:33:52.081 10:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:52.081 10:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.339 10:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.597 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:33:52.597 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:52.856 true 00:33:52.856 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:52.856 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.114 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:53.373 10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:33:53.373 
10:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:33:53.632 true 00:33:53.632 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:53.632 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.891 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.149 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:54.150 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:54.150 true 00:33:54.150 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:54.150 10:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.409 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.669 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:54.669 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:54.928 true 00:33:54.928 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:54.928 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.187 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:55.446 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:33:55.446 10:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:33:55.705 true 00:33:55.705 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:55.705 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.964 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:56.223 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:33:56.223 10:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:33:56.482 true 00:33:56.482 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:56.482 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:56.741 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:56.999 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:33:56.999 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:33:57.258 true 00:33:57.258 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:57.258 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:57.517 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:57.782 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:33:57.782 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:33:58.050 true 00:33:58.050 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:58.050 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.326 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:58.603 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:33:58.603 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:33:58.861 true 00:33:59.120 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:59.120 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.379 10:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:59.639 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:33:59.639 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:33:59.639 true 00:33:59.639 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:33:59.639 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.898 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:00.157 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:34:00.157 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:34:00.417 true 00:34:00.417 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:34:00.417 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.676 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:00.936 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:34:00.936 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:34:01.195 true 00:34:01.195 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227 00:34:01.195 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.455 10:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:01.455 Initializing NVMe Controllers 00:34:01.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:01.455 Controller IO queue size 128, less than required. 00:34:01.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
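Stripped of the xtrace prefixes, the churn repeated above (ns_hotplug_stress.sh lines 44-50) is a single loop: while the spdk_nvme_perf process started earlier is alive, NSID 1 is hot-removed and re-added while the NULL1 bdev behind NSID 2 is resized upward by one each pass. A paraphrase of the traced script, not its verbatim source:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # the I/O load the loop races against (exact invocation from the trace)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove NSID 1 under active I/O
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"    # grow the other namespace's bdev (size in MB)
    done
    wait "$PERF_PID"                                  # loop exits once the 30 s perf run ends

spdk_nvme_perf's output is buffered until the run finishes, so its startup messages and final latency summary appear interleaved with the last loop iterations above and below.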
00:34:01.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:34:01.455 Initialization complete. Launching workers.
00:34:01.455 ========================================================
00:34:01.455 Latency(us)
00:34:01.455 Device Information : IOPS MiB/s Average min max
00:34:01.455 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19964.87 9.75 6411.04 3202.67 18182.04
00:34:01.455 ========================================================
00:34:01.455 Total : 19964.87 9.75 6411.04 3202.67 18182.04
00:34:01.715 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:34:01.715 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:34:01.973 true
00:34:01.973 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 109227
00:34:01.973 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (109227) - No such process
00:34:01.973 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 109227
00:34:01.973 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:02.232 10:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:34:02.491 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:34:02.491 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:34:02.491 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:34:02.491 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:02.491 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:34:02.750 null0
00:34:02.750 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:02.750 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:02.750 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:34:03.008 null1
00:34:03.008 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:03.008 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:03.008 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:34:03.267 null2
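[editor's note] The loop that just ended above (it exits once kill -0 reports perf process 109227 gone) is the first half of the stress: while the initiator hammers the subsystem, namespace 1 is repeatedly detached and re-attached and the NULL1 bdev is grown one unit per iteration. A minimal bash reconstruction of the traced pattern follows; "rpc.py" stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and "$perf_pid" for the traced PID 109227 (both are stand-in names, not from the log), and this is a sketch of the xtrace, not the verbatim script:

    # Sketch of ns_hotplug_stress.sh@44-@53 as reconstructed from the xtrace.
    # null_size was initialized earlier in the run; values 1034..1041 appear here.
    while kill -0 "$perf_pid" 2> /dev/null; do      # run until the initiator exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                             # one unit bigger each pass
        rpc.py bdev_null_resize NULL1 "$null_size"  # grow NULL1 under live I/O
    done
    wait "$perf_pid"                                # @53; the failed kill -0 above broke the loop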
00:34:03.267 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:03.267 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:03.267 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:03.526 null3 00:34:03.526 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:03.526 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:03.526 10:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:03.526 null4 00:34:03.526 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:03.526 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:03.526 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:03.785 null5 00:34:03.785 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:03.785 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:03.785 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:04.044 null6 00:34:04.044 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:04.044 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:04.044 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:04.303 null7 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
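[editor's note] With null3 through null7 created above, all eight worker backing devices now exist. Each bdev_null_create call takes a name, a total size (100, in MB per the RPC's usual argument order), and a 4096-byte block size; null bdevs discard writes and return zeroes on reads, so they are cheap to churn. A sketch of the creation loop traced at script lines 58-60:

    # One null bdev per planned hotplug worker (ns_hotplug_stress.sh@58-@60).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # name, size (MB), block size
    done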
00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:04.303 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 110483 110484 110486 110488 110490 110491 110493 110496 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.304 10:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:04.563 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:04.563 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:04.563 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:04.563 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
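[editor's note] From here to the end of the section the log is eight of these workers interleaving, which is why the @16/@17/@18 xtrace lines repeat in a scrambled order. Each worker is a backgrounded add_remove <nsid> <bdev> (script lines 14-18, launched at lines 62-66), and the parent waits on the PIDs listed above (110483 ... 110496). A bash reconstruction of that pattern, again a sketch from the xtrace rather than the verbatim script:

    # add_remove <nsid> <bdev> (ns_hotplug_stress.sh@14-@18 per the xtrace):
    # attach and detach the same namespace ten times in a tight loop.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # One worker per null bdev, NSIDs 1..8, then reap them (@62-@66).
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"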
00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.823 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.082 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:05.082 10:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.342 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 
10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:05.602 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:05.862 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:06.122 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.122 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:06.123 10:05:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.123 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:06.382 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:06.382 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:06.382 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:06.382 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:06.382 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:06.383 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:06.383 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:06.383 10:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 
10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.642 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:06.643 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:06.643 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:06.643 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:06.902 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:07.162 10:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:07.162 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:07.422 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:07.422 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.422 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.422 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:07.423 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:07.423 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:07.423 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:07.423 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:07.423 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.681 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.939 10:05:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:07.939 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:08.198 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:08.457 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:08.457 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:08.457 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:34:08.457 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.457 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.457 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:08.716 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.975 10:05:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:08.975 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:08.976 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
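Each rpc.py call in the loop is a thin JSON-RPC client talking to the target over /var/tmp/spdk.sock. For reference, the add_ns invocation at the end of the previous entry maps to roughly the request sketched in the comment below; the method and parameter names follow SPDK's documented JSON-RPC interface, but the exact framing here is an illustration, not a capture from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    # ...posts approximately this JSON-RPC 2.0 request (id value arbitrary):
    # {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
    #  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #             "namespace": {"bdev_name": "null2", "nsid": 3}}}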
00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:09.236 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:09.495 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.496 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.496 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:09.755 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:10.014 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.015 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.015 rmmod nvme_tcp 00:34:10.274 rmmod nvme_fabrics 00:34:10.274 rmmod nvme_keyring 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 109096 ']' 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 109096 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 109096 ']' 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 109096 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109096 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:10.274 10:05:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:10.274 killing process with pid 109096 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109096' 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 109096 00:34:10.274 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 109096 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:11.210 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:11.211 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:34:11.470
00:34:11.470 real 0m44.174s
00:34:11.470 user 3m12.860s
00:34:11.470 sys 0m18.834s
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:34:11.470 ************************************
00:34:11.470 END TEST nvmf_ns_hotplug_stress
00:34:11.470 ************************************
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:11.470 ************************************
00:34:11.470 START TEST nvmf_delete_subsystem
00:34:11.470 ************************************
00:34:11.470 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:34:11.470 * Looking for test storage...
00:34:11.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:11.470 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:11.470 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:34:11.470 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:11.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.730 --rc genhtml_branch_coverage=1 00:34:11.730 --rc genhtml_function_coverage=1 00:34:11.730 --rc genhtml_legend=1 00:34:11.730 --rc geninfo_all_blocks=1 00:34:11.730 --rc geninfo_unexecuted_blocks=1 00:34:11.730 00:34:11.730 ' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:11.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.730 --rc genhtml_branch_coverage=1 00:34:11.730 --rc genhtml_function_coverage=1 00:34:11.730 --rc genhtml_legend=1 00:34:11.730 --rc geninfo_all_blocks=1 00:34:11.730 --rc geninfo_unexecuted_blocks=1 00:34:11.730 00:34:11.730 ' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:11.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.730 --rc genhtml_branch_coverage=1 00:34:11.730 --rc genhtml_function_coverage=1 00:34:11.730 --rc genhtml_legend=1 00:34:11.730 --rc geninfo_all_blocks=1 00:34:11.730 --rc geninfo_unexecuted_blocks=1 00:34:11.730 00:34:11.730 ' 00:34:11.730 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:11.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.731 --rc genhtml_branch_coverage=1 00:34:11.731 --rc genhtml_function_coverage=1 00:34:11.731 --rc 
genhtml_legend=1 00:34:11.731 --rc geninfo_all_blocks=1 00:34:11.731 --rc geninfo_unexecuted_blocks=1 00:34:11.731 00:34:11.731 ' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.731 10:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.731 10:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:11.731 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:11.732 Cannot find device "nvmf_init_br" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:11.732 Cannot find device "nvmf_init_br2" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:11.732 Cannot find device "nvmf_tgt_br" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:11.732 Cannot find device "nvmf_tgt_br2" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:11.732 Cannot find device "nvmf_init_br" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:11.732 Cannot find device "nvmf_init_br2" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:11.732 Cannot find device "nvmf_tgt_br" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:11.732 Cannot find device "nvmf_tgt_br2" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:11.732 Cannot find device "nvmf_br" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:11.732 Cannot find device "nvmf_init_if" 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:34:11.732 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:11.992 Cannot find device "nvmf_init_if2" 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:11.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:11.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:11.992 10:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:11.992 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:34:11.993 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:34:11.993 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:34:11.993
00:34:11.993 --- 10.0.0.3 ping statistics ---
00:34:11.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:11.993 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:11.993 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:11.993 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms
00:34:11.993
00:34:11.993 --- 10.0.0.4 ping statistics ---
00:34:11.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:11.993 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:11.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:11.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:34:11.993
00:34:11.993 --- 10.0.0.1 ping statistics ---
00:34:11.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:11.993 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:11.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:11.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:34:11.993
00:34:11.993 --- 10.0.0.2 ping statistics ---
00:34:11.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:11.993 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=111863
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 111863
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 111863 ']'
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:11.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
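nvmfappstart above reduces to launching the target inside the test namespace and blocking until the RPC socket answers. A condensed sketch of the @508-@510 entries (the launch command is verbatim from the trace; the readiness poll is an assumed stand-in for the harness's waitforlisten helper):

    # Start nvmf_tgt in the target netns: shm id 0 (-i), all tracepoint groups
    # (-e 0xFFFF), interrupt mode, core mask 0x3 (the two reactors seen below).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Assumed readiness check: keep retrying an RPC until the app serves it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null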
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:11.993 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:34:12.252 [2024-12-05 10:05:59.747865] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:12.252 [2024-12-05 10:05:59.751150] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:34:12.252 [2024-12-05 10:05:59.751278] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:12.512 [2024-12-05 10:05:59.948734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:12.512 [2024-12-05 10:06:00.108395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:12.512 [2024-12-05 10:06:00.108504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:12.512 [2024-12-05 10:06:00.108543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:12.512 [2024-12-05 10:06:00.108565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:12.512 [2024-12-05 10:06:00.108580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:12.512 [2024-12-05 10:06:00.111350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:12.512 [2024-12-05 10:06:00.111355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:13.079 [2024-12-05 10:06:00.445045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:13.079 [2024-12-05 10:06:00.445264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:13.079 [2024-12-05 10:06:00.445561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
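For orientation, the addresses the target is about to serve on live on the veth/bridge topology that nvmf_veth_init assembled in the @177-@219 entries earlier: one veth pair per initiator address kept on the host, one pair per target address with its far end moved into nvmf_tgt_ns_spdk, and all host-side ends joined by the nvmf_br bridge. Condensed from the trace (only the first interface of each pair shown; the *2 interfaces, the per-leg `ip link set ... up` calls, and the iptables ACCEPT rules repeat the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg, stays on host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # far end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the host-side ends together
    ip link set nvmf_tgt_br master nvmf_br

This is why the host pinged 10.0.0.3/10.0.0.4 and the namespace pinged 10.0.0.1/10.0.0.2 in the connectivity checks above: each side reaches the other only across the bridge.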
00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.079 [2024-12-05 10:06:00.676884] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.079 [2024-12-05 10:06:00.701386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.079 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.337 NULL1 00:34:13.337 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.338 10:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.338 Delay0 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=111913 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:13.338 10:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:13.338 [2024-12-05 10:06:00.952524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
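Stripped of the xtrace prefixes, the rpc_cmd calls above build the entire stack under test. Shown here as direct rpc.py invocations for readability — the script routes them through its rpc_cmd wrapper, so this exact calling form is an assumption:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latency, in microseconds
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The ~1 s artificial latency on Delay0 is the point of the test: with spdk_nvme_perf driving queue depth 128 against it, a full queue of I/O is guaranteed to be outstanding when the subsystem is deleted mid-run below.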
00:34:15.236 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:15.236 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:15.236 10:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:34:15.494 [... several hundred interleaved "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries elided — the subsystem is torn down while spdk_nvme_perf still holds a 128-deep queue, so every outstanding I/O is failed back to the initiator; the qpair state-transition errors are kept below as landmarks ...]
00:34:15.494 [2024-12-05 10:06:03.016608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010480 is same with the state(6) to be set
00:34:15.494 [2024-12-05 10:06:03.019840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000ff80 is same with the state(6) to be set
00:34:15.495 [2024-12-05 10:06:03.020780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fd00 is same with the state(6) to be set
00:34:16.427 [2024-12-05 10:06:03.972817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f300 is same with the state(6) to be set
00:34:16.427 [2024-12-05 10:06:04.019506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(6) to be set
00:34:16.427 [2024-12-05 10:06:04.020191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(6) to be set
00:34:16.427 [2024-12-05 10:06:04.021828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(6) to be set
00:34:16.428 Read completed with error (sct=0, sc=8)
00:34:16.428 [2024-12-05 10:06:04.025172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
00:34:16.428 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:16.428 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:34:16.428 Initializing NVMe Controllers
00:34:16.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:34:16.428 Controller IO queue size 128, less than required.
00:34:16.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:16.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:34:16.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:34:16.428 Initialization complete. Launching workers.
00:34:16.428 ========================================================
00:34:16.428                                                                                Latency(us)
00:34:16.428 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:34:16.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   162.05     0.08  918334.06     647.37 1022247.25
00:34:16.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   169.46     0.08  897714.82    2121.98 1017787.37
00:34:16.428 ========================================================
00:34:16.428 Total                                                                    :   331.52     0.16  907793.97     647.37 1022247.25
00:34:16.428 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 111913
00:34:16.428 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:34:16.428 [2024-12-05 10:06:04.026892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f300 (9): Bad file descriptor
00:34:16.428 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 111913
00:34:16.992 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (111913) - No such process
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 111913
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 111913
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
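The delay/kill -0 trace above, together with the NOT wait check that finishes just below, is a bounded wait for perf to die. Reconstructed from the delete_subsystem.sh line numbers in the trace (the exact control flow is an assumption), the idiom is roughly:

  delay=0
  while kill -0 "$perf_pid"; do        # line 35: perf still running?
      sleep 0.5                        # line 36
      (( delay++ > 30 )) && return 1   # line 38: give up after ~15 s
  done
  NOT wait "$perf_pid"                 # line 45: assert that wait returns non-zero

kill -0 sends no signal and merely tests that the PID exists, and NOT is the suite's exit-status-inverting helper, so the test passes exactly when spdk_nvme_perf exited with errors after its subsystem vanished. The same pattern, with a 20-iteration budget, guards the second perf run further down.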
00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 111913 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.992 [2024-12-05 10:06:04.553659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.992 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.993 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=111956 00:34:16.993 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:16.993 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:16.993 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111956 00:34:16.993 10:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:17.250 [2024-12-05 10:06:04.798199] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:34:17.508 10:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:34:17.508 10:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111956
00:34:17.508 10:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the same (( delay++ > 20 )) / kill -0 111956 / sleep 0.5 triplet repeats at 00:34:18.073, 00:34:18.638, 00:34:19.205, 00:34:19.464 and 00:34:20.031 while the 3-second perf run completes ...]
00:34:20.290 Initializing NVMe Controllers
00:34:20.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:34:20.290 Controller IO queue size 128, less than required.
00:34:20.290 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:20.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:34:20.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:34:20.290 Initialization complete. Launching workers.
00:34:20.290 ========================================================
00:34:20.290                                                                                Latency(us)
00:34:20.290 Device Information                                                       :     IOPS    MiB/s    Average         min         max
00:34:20.290 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1005104.13  1000246.81  1042089.67
00:34:20.290 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1007479.16  1000242.74  1014400.98
00:34:20.290 ========================================================
00:34:20.290 Total                                                                    :   256.00     0.12 1006291.64  1000242.74  1042089.67
00:34:20.549 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 111956
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (111956) - No such process
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 111956
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 111863 ']'
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 111863
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 111863 ']'
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 111863
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 111863 00:34:20.809 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.809 killing process with pid 111863 00:34:20.809 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.809 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111863' 00:34:20.809 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 111863 00:34:20.809 10:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 111863 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:21.749 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:34:22.009 00:34:22.009 real 0m10.489s 00:34:22.009 user 0m26.245s 00:34:22.009 sys 0m2.083s 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.009 ************************************ 00:34:22.009 END TEST nvmf_delete_subsystem 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:22.009 ************************************ 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.009 ************************************ 00:34:22.009 START TEST nvmf_host_management 00:34:22.009 ************************************ 00:34:22.009 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:22.009 * Looking for test storage... 
00:34:22.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.269 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.270 --rc genhtml_branch_coverage=1 00:34:22.270 --rc genhtml_function_coverage=1 00:34:22.270 --rc genhtml_legend=1 00:34:22.270 --rc geninfo_all_blocks=1 00:34:22.270 --rc geninfo_unexecuted_blocks=1 00:34:22.270 00:34:22.270 ' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.270 --rc genhtml_branch_coverage=1 00:34:22.270 --rc genhtml_function_coverage=1 00:34:22.270 --rc genhtml_legend=1 00:34:22.270 --rc geninfo_all_blocks=1 00:34:22.270 --rc geninfo_unexecuted_blocks=1 00:34:22.270 00:34:22.270 ' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.270 --rc genhtml_branch_coverage=1 00:34:22.270 --rc genhtml_function_coverage=1 00:34:22.270 --rc genhtml_legend=1 00:34:22.270 --rc geninfo_all_blocks=1 00:34:22.270 --rc geninfo_unexecuted_blocks=1 00:34:22.270 00:34:22.270 ' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.270 --rc genhtml_branch_coverage=1 00:34:22.270 --rc genhtml_function_coverage=1 00:34:22.270 --rc genhtml_legend=1 
00:34:22.270 --rc geninfo_all_blocks=1 00:34:22.270 --rc geninfo_unexecuted_blocks=1 00:34:22.270 00:34:22.270 ' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triple repeated by each re-sourcing of paths/export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... as above, with the Go bin directory prepended ...]
10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... as above, with the protoc bin directory prepended ...]
10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... final PATH as above ...]
00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:22.270 10:06:09
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:22.270 10:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:22.270 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:22.271 Cannot find device "nvmf_init_br" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:22.271 Cannot find device "nvmf_init_br2" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:22.271 Cannot find device "nvmf_tgt_br" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:22.271 Cannot find device "nvmf_tgt_br2" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:22.271 Cannot find device "nvmf_init_br" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:34:22.271 Cannot find device "nvmf_init_br2" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:22.271 Cannot find device "nvmf_tgt_br" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:22.271 Cannot find device "nvmf_tgt_br2" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:22.271 Cannot find device "nvmf_br" 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:34:22.271 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:22.531 Cannot find device "nvmf_init_if" 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:22.531 Cannot find device "nvmf_init_if2" 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:22.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:22.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:22.531 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:22.531 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:22.790 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:22.790 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:22.790 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:22.790 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:22.790 10:06:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:22.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:22.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:34:22.791 00:34:22.791 --- 10.0.0.3 ping statistics --- 00:34:22.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.791 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:22.791 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:22.791 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:34:22.791 00:34:22.791 --- 10.0.0.4 ping statistics --- 00:34:22.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.791 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:22.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:34:22.791 00:34:22.791 --- 10.0.0.1 ping statistics --- 00:34:22.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.791 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:22.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:22.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:34:22.791 00:34:22.791 --- 10.0.0.2 ping statistics --- 00:34:22.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.791 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=112260 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 112260 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 112260 ']' 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
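For readers skimming the trace, the namespace plumbing above reduces to the following condensed sketch (commands copied from the traced nvmf_veth_init steps; the second veth pair, the 10.0.0.2/10.0.0.4 addresses, the link-up calls and the ping checks are elided, and the waitforlisten loop is paraphrased rather than quoted verbatim):

# One initiator veth in the root namespace, one target veth inside
# nvmf_tgt_ns_spdk, both bridged together through nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Let NVMe/TCP traffic (port 4420) into the initiator-side interface.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# Launch the target inside the namespace, then wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done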
00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.791 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:22.791 [2024-12-05 10:06:10.352363] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.791 [2024-12-05 10:06:10.355525] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:34:22.791 [2024-12-05 10:06:10.355660] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.050 [2024-12-05 10:06:10.554300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:23.309 [2024-12-05 10:06:10.708410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.309 [2024-12-05 10:06:10.708496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.309 [2024-12-05 10:06:10.708519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.309 [2024-12-05 10:06:10.708540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.309 [2024-12-05 10:06:10.708556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.309 [2024-12-05 10:06:10.711289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.309 [2024-12-05 10:06:10.711465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.309 [2024-12-05 10:06:10.711590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.309 [2024-12-05 10:06:10.711600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:23.568 [2024-12-05 10:06:11.025676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:23.568 [2024-12-05 10:06:11.027071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:23.568 [2024-12-05 10:06:11.027619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:23.568 [2024-12-05 10:06:11.027801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:23.568 [2024-12-05 10:06:11.028080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
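Decoding the flags in the notices above: the target's -m 0x1E is binary 11110, i.e. cores 1 through 4, which matches both the "Total cores available: 4" line and the four "Reactor started on core N" notices (core 0 is deliberately left free for the bdevperf initiator, which runs with -c 0x1 later in this trace); -e 0xFFFF enables every tracepoint group, hence the spdk_trace hint; and --interrupt-mode is why each reactor and spdk_thread is switched to intr mode, letting it sleep on an event fd instead of busy-polling.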
00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:23.827 [2024-12-05 10:06:11.376991] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.827 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:24.101 Malloc0 00:34:24.101 [2024-12-05 10:06:11.505151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=112328 00:34:24.101 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 112328 /var/tmp/bdevperf.sock 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 112328 ']' 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:24.101 { 00:34:24.101 "params": { 00:34:24.101 "name": "Nvme$subsystem", 00:34:24.101 "trtype": "$TEST_TRANSPORT", 00:34:24.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.101 "adrfam": "ipv4", 00:34:24.101 "trsvcid": "$NVMF_PORT", 00:34:24.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.101 "hdgst": ${hdgst:-false}, 00:34:24.101 "ddgst": ${ddgst:-false} 00:34:24.101 }, 00:34:24.101 "method": "bdev_nvme_attach_controller" 00:34:24.101 } 00:34:24.101 EOF 00:34:24.101 )") 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
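The --json /dev/fd/63 argument above is bash process substitution: gen_nvmf_target_json expands the heredoc template just shown into a concrete bdev_nvme_attach_controller stanza and hands it to bdevperf as a pseudo-file. An equivalent spelled-out invocation (a sketch, not the verbatim script line):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

The expanded JSON, with Nvme0 pointed at 10.0.0.3:4420, is printed immediately below.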
00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:24.101 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:24.101 "params": { 00:34:24.101 "name": "Nvme0", 00:34:24.101 "trtype": "tcp", 00:34:24.101 "traddr": "10.0.0.3", 00:34:24.101 "adrfam": "ipv4", 00:34:24.101 "trsvcid": "4420", 00:34:24.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.101 "hdgst": false, 00:34:24.101 "ddgst": false 00:34:24.101 }, 00:34:24.101 "method": "bdev_nvme_attach_controller" 00:34:24.101 }' 00:34:24.102 [2024-12-05 10:06:11.679070] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:34:24.102 [2024-12-05 10:06:11.679223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112328 ] 00:34:24.412 [2024-12-05 10:06:11.871684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.671 [2024-12-05 10:06:12.034131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.931 Running I/O for 10 seconds... 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.192 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:25.192 [2024-12-05 10:06:12.769390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.192 [2024-12-05 10:06:12.769455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.192 [2024-12-05 10:06:12.769486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.192 [2024-12-05 10:06:12.769499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.192 [2024-12-05 10:06:12.769513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.192 [2024-12-05 10:06:12.769525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.192 [2024-12-05 10:06:12.769538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.192 [2024-12-05 10:06:12.769549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 
10:06:12.769586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 
10:06:12.769840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.769999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770128] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.193 [2024-12-05 10:06:12.770657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.193 [2024-12-05 10:06:12.770669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.770981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.770996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:25.194 [2024-12-05 10:06:12.771381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.194 [2024-12-05 10:06:12.771763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771778] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.194 [2024-12-05 10:06:12.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.194 [2024-12-05 10:06:12.771815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.194 [2024-12-05 10:06:12.771837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.194 [2024-12-05 10:06:12.771848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:34:25.194 [2024-12-05 10:06:12.772954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:25.194 task offset: 62720 on job bdev=Nvme0n1 fails 00:34:25.194 00:34:25.194 Latency(us) 00:34:25.194 [2024-12-05T10:06:12.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.194 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:25.194 Job: Nvme0n1 ended in about 0.33 seconds with error 00:34:25.194 Verification LBA range: start 0x0 length 0x400 00:34:25.194 Nvme0n1 : 0.33 1351.41 84.46 193.06 0.00 40134.65 3351.27 36700.16 00:34:25.194 [2024-12-05T10:06:12.823Z] =================================================================================================================== 00:34:25.194 [2024-12-05T10:06:12.823Z] Total : 1351.41 84.46 193.06 0.00 40134.65 3351.27 36700.16 00:34:25.194 [2024-12-05 10:06:12.777300] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:25.194 [2024-12-05 10:06:12.777358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.194 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:25.194 [2024-12-05 10:06:12.782969] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
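Stripped of the xtrace noise, the host-management check that just ran is this RPC sequence (a sketch reconstructed from the host_management.sh lines in the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py):

# Revoke the host's access while bdevperf has I/O in flight ...
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... every queued command is failed with "ABORTED - SQ DELETION" and the
# initiator starts resetting the controller; then re-authorize the host:
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# The reconnect now succeeds: "Resetting controller successful" above.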
00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 112328 00:34:26.580 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (112328) - No such process 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.580 { 00:34:26.580 "params": { 00:34:26.580 "name": "Nvme$subsystem", 00:34:26.580 "trtype": "$TEST_TRANSPORT", 00:34:26.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.580 "adrfam": "ipv4", 00:34:26.580 "trsvcid": "$NVMF_PORT", 00:34:26.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.580 "hdgst": ${hdgst:-false}, 00:34:26.580 "ddgst": ${ddgst:-false} 00:34:26.580 }, 00:34:26.580 "method": "bdev_nvme_attach_controller" 00:34:26.580 } 00:34:26.580 EOF 00:34:26.580 )") 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:26.580 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.580 "params": { 00:34:26.580 "name": "Nvme0", 00:34:26.580 "trtype": "tcp", 00:34:26.580 "traddr": "10.0.0.3", 00:34:26.580 "adrfam": "ipv4", 00:34:26.580 "trsvcid": "4420", 00:34:26.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:26.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:26.580 "hdgst": false, 00:34:26.580 "ddgst": false 00:34:26.580 }, 00:34:26.580 "method": "bdev_nvme_attach_controller" 00:34:26.580 }' 00:34:26.580 [2024-12-05 10:06:13.907424] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
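That printf is the last stage of gen_nvmf_target_json assembling the config bdevperf reads from /dev/fd/62. Written out as a file, the document is roughly the following; the inner method object is verbatim from the log, while the outer subsystems wrapper is the standard SPDK JSON-config shape and is assumed here rather than shown above:

# hypothetical path; the test itself feeds a process-substitution fd instead of a file
cat <<'EOF' > /tmp/bdevperf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1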
00:34:26.580 [2024-12-05 10:06:13.907602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112379 ]
00:34:26.580 [2024-12-05 10:06:14.090012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:26.580 [2024-12-05 10:06:14.206088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:27.149 Running I/O for 1 seconds...
00:34:28.086 1536.00 IOPS, 96.00 MiB/s
00:34:28.086 Latency(us)
00:34:28.086 [2024-12-05T10:06:15.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:28.086 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:28.086 Verification LBA range: start 0x0 length 0x400
00:34:28.086 Nvme0n1 : 1.02 1566.02 97.88 0.00 0.00 40210.48 7030.23 36461.85
00:34:28.086 [2024-12-05T10:06:15.715Z] ===================================================================================================================
00:34:28.086 [2024-12-05T10:06:15.715Z] Total : 1566.02 97.88 0.00 0.00 40210.48 7030.23 36461.85
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:29.463 rmmod nvme_tcp
00:34:29.463 rmmod nvme_fabrics
00:34:29.463 rmmod nvme_keyring
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 112260 ']'
00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 112260
00:34:29.463 10:06:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 112260 ']' 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 112260 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112260 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.463 killing process with pid 112260 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112260' 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 112260 00:34:29.463 10:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 112260 00:34:30.399 [2024-12-05 10:06:17.808352] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:30.399 10:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:30.399 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:30.658 00:34:30.658 real 0m8.598s 00:34:30.658 user 0m26.954s 00:34:30.658 sys 0m3.129s 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.658 ************************************ 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:30.658 END TEST nvmf_host_management 00:34:30.658 ************************************ 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:30.658 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:30.659 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.659 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:30.659 ************************************ 00:34:30.659 START TEST nvmf_lvol 00:34:30.659 ************************************ 00:34:30.659 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:30.659 * Looking for test storage... 
00:34:30.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:30.918 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.919 --rc genhtml_branch_coverage=1 00:34:30.919 --rc genhtml_function_coverage=1 00:34:30.919 --rc genhtml_legend=1 00:34:30.919 --rc geninfo_all_blocks=1 00:34:30.919 --rc geninfo_unexecuted_blocks=1 00:34:30.919 00:34:30.919 ' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.919 --rc genhtml_branch_coverage=1 00:34:30.919 --rc genhtml_function_coverage=1 00:34:30.919 --rc genhtml_legend=1 00:34:30.919 --rc geninfo_all_blocks=1 00:34:30.919 --rc geninfo_unexecuted_blocks=1 00:34:30.919 00:34:30.919 ' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.919 --rc genhtml_branch_coverage=1 00:34:30.919 --rc genhtml_function_coverage=1 00:34:30.919 --rc genhtml_legend=1 00:34:30.919 --rc geninfo_all_blocks=1 00:34:30.919 --rc geninfo_unexecuted_blocks=1 00:34:30.919 00:34:30.919 ' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.919 --rc genhtml_branch_coverage=1 00:34:30.919 --rc genhtml_function_coverage=1 00:34:30.919 --rc genhtml_legend=1 00:34:30.919 --rc geninfo_all_blocks=1 00:34:30.919 --rc geninfo_unexecuted_blocks=1 00:34:30.919 00:34:30.919 ' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.919 10:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:30.919 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:30.920 Cannot find device "nvmf_init_br" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:30.920 Cannot find device "nvmf_init_br2" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:30.920 Cannot find device "nvmf_tgt_br" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:30.920 Cannot find device "nvmf_tgt_br2" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:30.920 Cannot find device "nvmf_init_br" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:30.920 Cannot find device "nvmf_init_br2" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:30.920 Cannot find 
device "nvmf_tgt_br" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:30.920 Cannot find device "nvmf_tgt_br2" 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:34:30.920 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:31.178 Cannot find device "nvmf_br" 00:34:31.178 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:34:31.178 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:31.178 Cannot find device "nvmf_init_if" 00:34:31.178 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:31.179 Cannot find device "nvmf_init_if2" 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:31.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:31.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:31.179 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:31.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:31.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms
00:34:31.438 
00:34:31.438 --- 10.0.0.3 ping statistics ---
00:34:31.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.438 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:34:31.438 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:34:31.438 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:34:31.438 
00:34:31.438 --- 10.0.0.4 ping statistics ---
00:34:31.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.438 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:34:31.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:31.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:34:31.438 
00:34:31.438 --- 10.0.0.1 ping statistics ---
00:34:31.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.438 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:34:31.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:31.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms
00:34:31.438 
00:34:31.438 --- 10.0.0.2 ping statistics ---
00:34:31.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.438 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:31.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
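All four reachability checks pass before the target is launched. The topology behind them was built by nvmf_veth_init a few lines back; condensed to its essentials (device names, addresses and iptables rules exactly as logged; link-up steps and error handling trimmed):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                            # bridge the two free veth ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # tagged SPDK_NVMF so iptr can strip it later
ping -c 1 10.0.0.3                                                 # initiator -> target, as verified above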
00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=112664 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 112664 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 112664 ']' 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.438 10:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:31.438 [2024-12-05 10:06:19.017564] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:31.438 [2024-12-05 10:06:19.020950] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:34:31.438 [2024-12-05 10:06:19.021240] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.697 [2024-12-05 10:06:19.216116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:31.957 [2024-12-05 10:06:19.382167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.957 [2024-12-05 10:06:19.382535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.957 [2024-12-05 10:06:19.382757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.957 [2024-12-05 10:06:19.382895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.957 [2024-12-05 10:06:19.382975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.957 [2024-12-05 10:06:19.385351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.957 [2024-12-05 10:06:19.385496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.957 [2024-12-05 10:06:19.385503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.216 [2024-12-05 10:06:19.708745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:32.216 [2024-12-05 10:06:19.709587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:32.216 [2024-12-05 10:06:19.709852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
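With the reactors up and the spdk_threads switched to interrupt mode, the target blocks on file descriptors between requests instead of busy-polling. The bring-up that produced the last stretch of log, reduced to a sketch (binary path, flags and core mask exactly as logged; waitforlisten is the autotest helper, seen above, that polls the /var/tmp/spdk.sock RPC socket):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!                     # 112664 in this run
waitforlisten "$nvmfpid"       # returns once the RPC server is listening
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192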
00:34:32.216 [2024-12-05 10:06:19.710195] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.475 10:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:32.733 [2024-12-05 10:06:20.243177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.733 10:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.300 10:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:33.300 10:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.558 10:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:33.558 10:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:33.816 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:34.074 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5f369575-ec8f-422d-83cd-55188369ce2b 00:34:34.074 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5f369575-ec8f-422d-83cd-55188369ce2b lvol 20 00:34:34.333 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d601f636-497f-4fd5-9d84-84534663f1ba 00:34:34.333 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:34.591 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d601f636-497f-4fd5-9d84-84534663f1ba 00:34:34.849 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:35.107 [2024-12-05 10:06:22.595125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 
*** 00:34:35.107 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:35.365 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=112808 00:34:35.365 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:35.365 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:36.299 10:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d601f636-497f-4fd5-9d84-84534663f1ba MY_SNAPSHOT 00:34:36.558 10:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3326142f-ea41-4fe8-9f6c-1d6081e589e3 00:34:36.558 10:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d601f636-497f-4fd5-9d84-84534663f1ba 30 00:34:37.125 10:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 3326142f-ea41-4fe8-9f6c-1d6081e589e3 MY_CLONE 00:34:37.384 10:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a625e56d-087a-4921-b648-fda1c72e4135 00:34:37.384 10:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a625e56d-087a-4921-b648-fda1c72e4135 00:34:37.958 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 112808 00:34:46.081 Initializing NVMe Controllers 00:34:46.081 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:34:46.081 Controller IO queue size 128, less than required. 00:34:46.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:46.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:46.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:46.081 Initialization complete. Launching workers. 
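Workers now hammer NSID 1 with random writes while the script mutates the volume underneath them; the per-core results follow below. The mutation sequence, condensed (rpc.py path, UUIDs and sizes exactly as logged; sizes are MiB per the LVOL_BDEV_INIT_SIZE/LVOL_BDEV_FINAL_SIZE constants earlier in the log):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvol=d601f636-497f-4fd5-9d84-84534663f1ba                 # 20 MiB lvol on lvstore 5f369575-ec8f-422d-83cd-55188369ce2b
$rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT            # -> 3326142f-ea41-4fe8-9f6c-1d6081e589e3, taken under live I/O
$rpc_py bdev_lvol_resize "$lvol" 30                       # grow the origin to 30 MiB while it serves writes
$rpc_py bdev_lvol_clone 3326142f-ea41-4fe8-9f6c-1d6081e589e3 MY_CLONE   # -> a625e56d-087a-4921-b648-fda1c72e4135
$rpc_py bdev_lvol_inflate a625e56d-087a-4921-b648-fda1c72e4135          # copy shared clusters; clone becomes independent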
00:34:46.081 ========================================================
00:34:46.081 Latency(us)
00:34:46.081 Device Information : IOPS MiB/s Average min max
00:34:46.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7588.50 29.64 16870.79 687.02 131014.32
00:34:46.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7278.30 28.43 17594.09 4377.89 142961.74
00:34:46.081 ========================================================
00:34:46.081 Total : 14866.80 58.07 17224.90 687.02 142961.74
00:34:46.081 
00:34:46.081 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:46.340 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d601f636-497f-4fd5-9d84-84534663f1ba
00:34:46.599 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f369575-ec8f-422d-83cd-55188369ce2b
00:34:46.599 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:34:46.599 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:34:46.599 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:34:46.599 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:46.599 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:46.600 rmmod nvme_tcp
00:34:46.600 rmmod nvme_fabrics
00:34:46.600 rmmod nvme_keyring
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 112664 ']'
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 112664
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 112664 ']'
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 112664
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112664
00:34:46.600 10:06:34
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112664' 00:34:46.600 killing process with pid 112664 00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 112664 00:34:46.600 10:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 112664 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:47.976 
10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0
00:34:47.976
00:34:47.976 real 0m17.366s
00:34:47.976 user 0m58.113s
00:34:47.976 sys 0m5.327s
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:47.976 ************************************
00:34:47.976 END TEST nvmf_lvol
00:34:47.976 ************************************
00:34:47.976 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:48.234 ************************************
00:34:48.234 START TEST nvmf_lvs_grow
00:34:48.234 ************************************
00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:34:48.234 * Looking for test storage...
00:34:48.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.235 ' 00:34:48.235 10:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
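The lengthy cmp_versions trace a little earlier, exercised as `lt 1.15 2` to choose lcov options, boils down to a field-by-field numeric compare after splitting each version string on '.', '-' and ':'. A condensed sketch of the '<' path it walked, with missing fields defaulting to 0 — a re-statement for readability, not the full scripts/common.sh helper:

lt() {    # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        # 10# forces base-10, the job the real helper's `decimal` does
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0   # here: 1 < 2, lcov is pre-2.0
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not 'less than'
}

Because it returned 0 for this run's lcov, the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings were exported in the LCOV_OPTS block above.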
00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.235 10:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:48.235 Cannot find device "nvmf_init_br" 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:34:48.235 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:48.494 Cannot find device "nvmf_init_br2" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:48.494 Cannot find device "nvmf_tgt_br" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:48.494 Cannot find device "nvmf_tgt_br2" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:48.494 Cannot find device "nvmf_init_br" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:48.494 Cannot find device "nvmf_init_br2" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:48.494 Cannot find device "nvmf_tgt_br" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:48.494 Cannot find device "nvmf_tgt_br2" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:48.494 Cannot find device "nvmf_br" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:48.494 Cannot find device "nvmf_init_if" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:48.494 Cannot find device "nvmf_init_if2" 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:48.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:48.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:48.494 10:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:48.494 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:34:48.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:48.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:34:48.753 00:34:48.753 --- 10.0.0.3 ping statistics --- 00:34:48.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.753 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:48.753 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:48.753 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:34:48.753 00:34:48.753 --- 10.0.0.4 ping statistics --- 00:34:48.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.753 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:48.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:34:48.753 00:34:48.753 --- 10.0.0.1 ping statistics --- 00:34:48.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.753 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:48.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:34:48.753 00:34:48.753 --- 10.0.0.2 ping statistics --- 00:34:48.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.753 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:48.753 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=113232 00:34:48.754 10:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 113232 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 113232 ']' 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.754 10:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:48.754 [2024-12-05 10:06:36.356218] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.754 [2024-12-05 10:06:36.359357] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:34:48.754 [2024-12-05 10:06:36.359484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.012 [2024-12-05 10:06:36.556043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.270 [2024-12-05 10:06:36.700966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.270 [2024-12-05 10:06:36.701036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:49.270 [2024-12-05 10:06:36.701068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.270 [2024-12-05 10:06:36.701084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.270 [2024-12-05 10:06:36.701094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.270 [2024-12-05 10:06:36.702169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.527 [2024-12-05 10:06:36.994884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:49.527 [2024-12-05 10:06:36.995269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
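At this point nvmf_veth_init has finished building the test network: two initiator-side veth ends in the root namespace (10.0.0.1, 10.0.0.2), two target-side ends moved into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), all of their peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and four pings to prove reachability in both directions. Stripped of trace prefixes, one initiator/target pair of the setup traced above amounts to (the second pair is built identically):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # the bridge ties the *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# (bringing each interface up is elided here; the trace does it per device)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # root netns -> target namespace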
00:34:49.785 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:49.785 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:34:49.785 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:49.785 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:49.785 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:50.043 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:50.043 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:50.300 [2024-12-05 10:06:37.711498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:50.300 ************************************
00:34:50.300 START TEST lvs_grow_clean
00:34:50.300 ************************************
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:34:50.300 10:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:50.558 10:06:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:50.558 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:50.816 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:34:50.816 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:34:50.816 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb lvol 150 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:34:51.380 10:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:51.637 [2024-12-05 10:06:39.203222] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:51.637 [2024-12-05 10:06:39.203408] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:51.637 true 00:34:51.637 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:34:51.637 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:51.895 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:51.895 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:52.152 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 00:34:52.410 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:52.669 [2024-12-05 10:06:40.131988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:52.669 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=113393 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 113393 /var/tmp/bdevperf.sock 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 113393 ']' 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:52.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:52.928 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.928 [2024-12-05 10:06:40.485050] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
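The lvol is now exported over NVMe/TCP and a bdevperf instance (pid 113393) is being pointed at it from the root namespace. Collapsed to its essentials, the export-and-attach sequence traced here is as follows, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

# export the 150M lvol over TCP, then attach bdevperf to it
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# -z: start idle and wait for RPCs; the perform_tests call further down kicks off the I/O
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0    # yields the Nvme0n1 bdev dumped below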
00:34:52.928 [2024-12-05 10:06:40.485235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113393 ] 00:34:53.187 [2024-12-05 10:06:40.672831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.445 [2024-12-05 10:06:40.815725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.012 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.012 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:54.012 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:54.012 Nvme0n1 00:34:54.270 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:54.529 [ 00:34:54.529 { 00:34:54.529 "aliases": [ 00:34:54.529 "5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175" 00:34:54.529 ], 00:34:54.529 "assigned_rate_limits": { 00:34:54.529 "r_mbytes_per_sec": 0, 00:34:54.529 "rw_ios_per_sec": 0, 00:34:54.529 "rw_mbytes_per_sec": 0, 00:34:54.529 "w_mbytes_per_sec": 0 00:34:54.529 }, 00:34:54.529 "block_size": 4096, 00:34:54.529 "claimed": false, 00:34:54.529 "driver_specific": { 00:34:54.529 "mp_policy": "active_passive", 00:34:54.529 "nvme": [ 00:34:54.529 { 00:34:54.529 "ctrlr_data": { 00:34:54.529 "ana_reporting": false, 00:34:54.529 "cntlid": 1, 00:34:54.529 "firmware_revision": "25.01", 00:34:54.529 "model_number": "SPDK bdev Controller", 00:34:54.529 "multi_ctrlr": true, 00:34:54.529 "oacs": { 00:34:54.529 "firmware": 0, 00:34:54.529 "format": 0, 00:34:54.529 "ns_manage": 0, 00:34:54.529 "security": 0 00:34:54.529 }, 00:34:54.529 "serial_number": "SPDK0", 00:34:54.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.529 "vendor_id": "0x8086" 00:34:54.529 }, 00:34:54.529 "ns_data": { 00:34:54.529 "can_share": true, 00:34:54.529 "id": 1 00:34:54.529 }, 00:34:54.529 "trid": { 00:34:54.529 "adrfam": "IPv4", 00:34:54.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.529 "traddr": "10.0.0.3", 00:34:54.529 "trsvcid": "4420", 00:34:54.529 "trtype": "TCP" 00:34:54.529 }, 00:34:54.529 "vs": { 00:34:54.529 "nvme_version": "1.3" 00:34:54.529 } 00:34:54.529 } 00:34:54.529 ] 00:34:54.529 }, 00:34:54.529 "memory_domains": [ 00:34:54.529 { 00:34:54.529 "dma_device_id": "system", 00:34:54.529 "dma_device_type": 1 00:34:54.529 } 00:34:54.529 ], 00:34:54.529 "name": "Nvme0n1", 00:34:54.529 "num_blocks": 38912, 00:34:54.529 "numa_id": -1, 00:34:54.529 "product_name": "NVMe disk", 00:34:54.529 "supported_io_types": { 00:34:54.529 "abort": true, 00:34:54.529 "compare": true, 00:34:54.529 "compare_and_write": true, 00:34:54.529 "copy": true, 00:34:54.529 "flush": true, 00:34:54.529 "get_zone_info": false, 00:34:54.529 "nvme_admin": true, 00:34:54.529 "nvme_io": true, 00:34:54.529 "nvme_io_md": false, 00:34:54.529 "nvme_iov_md": false, 00:34:54.529 "read": true, 00:34:54.529 "reset": true, 00:34:54.529 "seek_data": false, 00:34:54.529 
"seek_hole": false, 00:34:54.529 "unmap": true, 00:34:54.529 "write": true, 00:34:54.529 "write_zeroes": true, 00:34:54.529 "zcopy": false, 00:34:54.529 "zone_append": false, 00:34:54.529 "zone_management": false 00:34:54.529 }, 00:34:54.529 "uuid": "5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175", 00:34:54.529 "zoned": false 00:34:54.529 } 00:34:54.529 ] 00:34:54.529 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=113440 00:34:54.529 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:54.529 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:54.529 Running I/O for 10 seconds... 00:34:55.466 Latency(us) 00:34:55.466 [2024-12-05T10:06:43.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.466 Nvme0n1 : 1.00 5529.00 21.60 0.00 0.00 0.00 0.00 0.00 00:34:55.466 [2024-12-05T10:06:43.095Z] =================================================================================================================== 00:34:55.466 [2024-12-05T10:06:43.095Z] Total : 5529.00 21.60 0.00 0.00 0.00 0.00 0.00 00:34:55.466 00:34:56.402 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:34:56.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:56.668 Nvme0n1 : 2.00 5749.00 22.46 0.00 0.00 0.00 0.00 0.00 00:34:56.668 [2024-12-05T10:06:44.297Z] =================================================================================================================== 00:34:56.668 [2024-12-05T10:06:44.297Z] Total : 5749.00 22.46 0.00 0.00 0.00 0.00 0.00 00:34:56.668 00:34:56.668 true 00:34:56.668 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:56.668 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:34:57.308 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:57.308 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:57.308 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 113440 00:34:57.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.567 Nvme0n1 : 3.00 5844.67 22.83 0.00 0.00 0.00 0.00 0.00 00:34:57.567 [2024-12-05T10:06:45.196Z] =================================================================================================================== 00:34:57.567 [2024-12-05T10:06:45.196Z] Total : 5844.67 22.83 0.00 0.00 0.00 0.00 0.00 00:34:57.567 00:34:58.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.501 Nvme0n1 : 4.00 5893.75 23.02 0.00 0.00 0.00 0.00 0.00 00:34:58.501 
[2024-12-05T10:06:46.130Z] =================================================================================================================== 00:34:58.501 [2024-12-05T10:06:46.130Z] Total : 5893.75 23.02 0.00 0.00 0.00 0.00 0.00 00:34:58.501 00:34:59.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:59.878 Nvme0n1 : 5.00 5924.00 23.14 0.00 0.00 0.00 0.00 0.00 00:34:59.878 [2024-12-05T10:06:47.507Z] =================================================================================================================== 00:34:59.878 [2024-12-05T10:06:47.507Z] Total : 5924.00 23.14 0.00 0.00 0.00 0.00 0.00 00:34:59.878 00:35:00.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:00.445 Nvme0n1 : 6.00 5943.33 23.22 0.00 0.00 0.00 0.00 0.00 00:35:00.445 [2024-12-05T10:06:48.074Z] =================================================================================================================== 00:35:00.445 [2024-12-05T10:06:48.075Z] Total : 5943.33 23.22 0.00 0.00 0.00 0.00 0.00 00:35:00.446 00:35:01.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.823 Nvme0n1 : 7.00 5953.43 23.26 0.00 0.00 0.00 0.00 0.00 00:35:01.823 [2024-12-05T10:06:49.452Z] =================================================================================================================== 00:35:01.823 [2024-12-05T10:06:49.452Z] Total : 5953.43 23.26 0.00 0.00 0.00 0.00 0.00 00:35:01.823 00:35:02.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:02.759 Nvme0n1 : 8.00 5944.50 23.22 0.00 0.00 0.00 0.00 0.00 00:35:02.759 [2024-12-05T10:06:50.388Z] =================================================================================================================== 00:35:02.759 [2024-12-05T10:06:50.388Z] Total : 5944.50 23.22 0.00 0.00 0.00 0.00 0.00 00:35:02.759 00:35:03.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.695 Nvme0n1 : 9.00 5961.67 23.29 0.00 0.00 0.00 0.00 0.00 00:35:03.695 [2024-12-05T10:06:51.324Z] =================================================================================================================== 00:35:03.695 [2024-12-05T10:06:51.324Z] Total : 5961.67 23.29 0.00 0.00 0.00 0.00 0.00 00:35:03.695 00:35:04.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.633 Nvme0n1 : 10.00 5959.50 23.28 0.00 0.00 0.00 0.00 0.00 00:35:04.633 [2024-12-05T10:06:52.262Z] =================================================================================================================== 00:35:04.633 [2024-12-05T10:06:52.262Z] Total : 5959.50 23.28 0.00 0.00 0.00 0.00 0.00 00:35:04.633 00:35:04.633 00:35:04.633 Latency(us) 00:35:04.633 [2024-12-05T10:06:52.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.633 Nvme0n1 : 10.01 5967.13 23.31 0.00 0.00 21446.49 9234.62 49092.42 00:35:04.633 [2024-12-05T10:06:52.262Z] =================================================================================================================== 00:35:04.633 [2024-12-05T10:06:52.262Z] Total : 5967.13 23.31 0.00 0.00 21446.49 9234.62 49092.42 00:35:04.633 { 00:35:04.633 "results": [ 00:35:04.633 { 00:35:04.633 "job": "Nvme0n1", 00:35:04.633 "core_mask": "0x2", 00:35:04.633 "workload": "randwrite", 00:35:04.633 "status": "finished", 00:35:04.633 "queue_depth": 128, 00:35:04.633 "io_size": 4096, 
00:35:04.633 "runtime": 10.008661, 00:35:04.633 "iops": 5967.131867089914, 00:35:04.633 "mibps": 23.309108855819975, 00:35:04.633 "io_failed": 0, 00:35:04.633 "io_timeout": 0, 00:35:04.633 "avg_latency_us": 21446.487484614576, 00:35:04.633 "min_latency_us": 9234.618181818181, 00:35:04.633 "max_latency_us": 49092.42181818182 00:35:04.633 } 00:35:04.633 ], 00:35:04.633 "core_count": 1 00:35:04.633 } 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 113393 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 113393 ']' 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 113393 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113393 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.633 killing process with pid 113393 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113393' 00:35:04.633 Received shutdown signal, test time was about 10.000000 seconds 00:35:04.633 00:35:04.633 Latency(us) 00:35:04.633 [2024-12-05T10:06:52.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.633 [2024-12-05T10:06:52.262Z] =================================================================================================================== 00:35:04.633 [2024-12-05T10:06:52.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 113393 00:35:04.633 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 113393 00:35:05.572 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:35:05.831 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.090 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:35:06.090 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:06.349 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:35:06.349 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:35:06.349 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:35:06.608 [2024-12-05 10:06:54.079377] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:35:06.609 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb
00:35:06.868 2024/12/05 10:06:54 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:35:06.868 request:
00:35:06.868 {
00:35:06.868   "method": "bdev_lvol_get_lvstores",
00:35:06.868   "params": {
00:35:06.868     "uuid": "d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb"
00:35:06.868   }
00:35:06.868 }
00:35:06.868 Got JSON-RPC error response
00:35:06.868 GoRPCClient: error on JSON-RPC call
00:35:06.868 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:35:06.868 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
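The Code=-19 (no such device) failure above is the expected outcome, not a test error: deleting the backing aio bdev hot-removes the lvstore, and the NOT wrapper from autotest_common.sh (whose bookkeeping continues in the trace below) passes only if the subsequent lookup fails. A sketch of the same negative check without the wrapper, with the paths and UUID from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_aio_delete aio_bdev
    # the lvstore must no longer resolve once its base bdev is gone
    if $rpc bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 2>/dev/null; then
        echo "lvstore still resolvable after hot-remove" >&2
        exit 1
    fi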
00:35:06.868 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.868 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.868 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:07.126 aio_bdev 00:35:07.126 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 00:35:07.126 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 00:35:07.127 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:07.127 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:07.127 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:07.127 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:07.127 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:07.386 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 -t 2000 00:35:07.644 [ 00:35:07.644 { 00:35:07.644 "aliases": [ 00:35:07.644 "lvs/lvol" 00:35:07.644 ], 00:35:07.644 "assigned_rate_limits": { 00:35:07.644 "r_mbytes_per_sec": 0, 00:35:07.644 "rw_ios_per_sec": 0, 00:35:07.644 "rw_mbytes_per_sec": 0, 00:35:07.644 "w_mbytes_per_sec": 0 00:35:07.644 }, 00:35:07.644 "block_size": 4096, 00:35:07.644 "claimed": false, 00:35:07.644 "driver_specific": { 00:35:07.644 "lvol": { 00:35:07.644 "base_bdev": "aio_bdev", 00:35:07.644 "clone": false, 00:35:07.644 "esnap_clone": false, 00:35:07.644 "lvol_store_uuid": "d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb", 00:35:07.644 "num_allocated_clusters": 38, 00:35:07.644 "snapshot": false, 00:35:07.644 "thin_provision": false 00:35:07.644 } 00:35:07.644 }, 00:35:07.644 "name": "5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175", 00:35:07.644 "num_blocks": 38912, 00:35:07.644 "product_name": "Logical Volume", 00:35:07.644 "supported_io_types": { 00:35:07.644 "abort": false, 00:35:07.644 "compare": false, 00:35:07.644 "compare_and_write": false, 00:35:07.644 "copy": false, 00:35:07.644 "flush": false, 00:35:07.644 "get_zone_info": false, 00:35:07.644 "nvme_admin": false, 00:35:07.644 "nvme_io": false, 00:35:07.644 "nvme_io_md": false, 00:35:07.644 "nvme_iov_md": false, 00:35:07.644 "read": true, 00:35:07.644 "reset": true, 00:35:07.644 "seek_data": true, 00:35:07.644 "seek_hole": true, 00:35:07.644 "unmap": true, 00:35:07.644 "write": true, 00:35:07.644 "write_zeroes": true, 00:35:07.644 "zcopy": false, 00:35:07.644 "zone_append": false, 00:35:07.644 "zone_management": false 00:35:07.644 }, 00:35:07.644 "uuid": 
"5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175", 00:35:07.644 "zoned": false 00:35:07.644 } 00:35:07.644 ] 00:35:07.644 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:07.644 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:35:07.644 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:07.902 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:07.902 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:07.902 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:35:08.160 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:08.160 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5e3c9e3b-6c09-4d9e-a9e4-f92fe5095175 00:35:08.418 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d69ecbfd-a650-4ef5-a1d7-19a8692ec5cb 00:35:08.418 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:08.984 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:09.243 ************************************ 00:35:09.243 END TEST lvs_grow_clean 00:35:09.243 ************************************ 00:35:09.243 00:35:09.243 real 0m18.967s 00:35:09.243 user 0m18.318s 00:35:09.243 sys 0m2.235s 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.243 ************************************ 00:35:09.243 START TEST lvs_grow_dirty 00:35:09.243 ************************************ 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:09.243 10:06:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:09.243 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:09.501 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:09.501 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:09.759 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=937434e4-554b-4aee-82cc-828b83ecb774 00:35:09.759 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:09.759 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:10.017 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:10.017 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:10.017 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 937434e4-554b-4aee-82cc-828b83ecb774 lvol 150 00:35:10.275 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:10.275 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:10.275 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:10.533 [2024-12-05 10:06:58.087247] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:10.533 [2024-12-05 10:06:58.088250] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:10.533 true 00:35:10.533 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:10.533 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:10.791 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:10.792 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:11.049 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:11.308 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:11.566 [2024-12-05 10:06:58.939799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:11.566 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:35:11.824 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=113822 00:35:11.824 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 113822 /var/tmp/bdevperf.sock 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 113822 ']' 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
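At this point the dirty variant has its whole fixture in place. Condensed, the setup traced above reduces to the following sequence (file, sizes, and names are the ones from this run; the rpc and aio variables are shorthand added here for readability, not part of the test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio"        # grow the backing file underneath the bdev
    $rpc bdev_aio_rescan aio_bdev  # block count goes from 51200 to 102400 4096-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Note that growing the bdev does not grow the lvstore by itself: total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is issued mid-run below, which is exactly what the test wants to leave half-finished when it kills the target.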
00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.825 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:11.825 [2024-12-05 10:06:59.336456] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:11.825 [2024-12-05 10:06:59.336623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113822 ] 00:35:12.083 [2024-12-05 10:06:59.517716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.083 [2024-12-05 10:06:59.631717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.651 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.651 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:12.652 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:12.911 Nvme0n1 00:35:12.911 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:13.170 [ 00:35:13.170 { 00:35:13.170 "aliases": [ 00:35:13.170 "ef31877d-a42e-4891-a84b-4ae164978b8d" 00:35:13.170 ], 00:35:13.170 "assigned_rate_limits": { 00:35:13.170 "r_mbytes_per_sec": 0, 00:35:13.170 "rw_ios_per_sec": 0, 00:35:13.170 "rw_mbytes_per_sec": 0, 00:35:13.170 "w_mbytes_per_sec": 0 00:35:13.170 }, 00:35:13.170 "block_size": 4096, 00:35:13.170 "claimed": false, 00:35:13.170 "driver_specific": { 00:35:13.170 "mp_policy": "active_passive", 00:35:13.170 "nvme": [ 00:35:13.170 { 00:35:13.170 "ctrlr_data": { 00:35:13.170 "ana_reporting": false, 00:35:13.170 "cntlid": 1, 00:35:13.170 "firmware_revision": "25.01", 00:35:13.170 "model_number": "SPDK bdev Controller", 00:35:13.170 "multi_ctrlr": true, 00:35:13.170 "oacs": { 00:35:13.170 "firmware": 0, 00:35:13.170 "format": 0, 00:35:13.170 "ns_manage": 0, 00:35:13.170 "security": 0 00:35:13.170 }, 00:35:13.170 "serial_number": "SPDK0", 00:35:13.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.170 "vendor_id": "0x8086" 00:35:13.170 }, 00:35:13.170 "ns_data": { 00:35:13.170 "can_share": true, 00:35:13.170 "id": 1 00:35:13.170 }, 00:35:13.170 "trid": { 00:35:13.170 "adrfam": "IPv4", 00:35:13.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.171 "traddr": "10.0.0.3", 00:35:13.171 "trsvcid": "4420", 00:35:13.171 "trtype": "TCP" 00:35:13.171 }, 00:35:13.171 "vs": { 00:35:13.171 "nvme_version": "1.3" 00:35:13.171 } 00:35:13.171 } 00:35:13.171 ] 00:35:13.171 }, 00:35:13.171 "memory_domains": [ 00:35:13.171 { 00:35:13.171 "dma_device_id": "system", 00:35:13.171 "dma_device_type": 1 
00:35:13.171 } 00:35:13.171 ], 00:35:13.171 "name": "Nvme0n1", 00:35:13.171 "num_blocks": 38912, 00:35:13.171 "numa_id": -1, 00:35:13.171 "product_name": "NVMe disk", 00:35:13.171 "supported_io_types": { 00:35:13.171 "abort": true, 00:35:13.171 "compare": true, 00:35:13.171 "compare_and_write": true, 00:35:13.171 "copy": true, 00:35:13.171 "flush": true, 00:35:13.171 "get_zone_info": false, 00:35:13.171 "nvme_admin": true, 00:35:13.171 "nvme_io": true, 00:35:13.171 "nvme_io_md": false, 00:35:13.171 "nvme_iov_md": false, 00:35:13.171 "read": true, 00:35:13.171 "reset": true, 00:35:13.171 "seek_data": false, 00:35:13.171 "seek_hole": false, 00:35:13.171 "unmap": true, 00:35:13.171 "write": true, 00:35:13.171 "write_zeroes": true, 00:35:13.171 "zcopy": false, 00:35:13.171 "zone_append": false, 00:35:13.171 "zone_management": false 00:35:13.171 }, 00:35:13.171 "uuid": "ef31877d-a42e-4891-a84b-4ae164978b8d", 00:35:13.171 "zoned": false 00:35:13.171 } 00:35:13.171 ] 00:35:13.171 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=113870 00:35:13.171 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:13.171 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:13.171 Running I/O for 10 seconds... 00:35:14.548 Latency(us) 00:35:14.548 [2024-12-05T10:07:02.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.548 Nvme0n1 : 1.00 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:35:14.548 [2024-12-05T10:07:02.177Z] =================================================================================================================== 00:35:14.548 [2024-12-05T10:07:02.177Z] Total : 5783.00 22.59 0.00 0.00 0.00 0.00 0.00 00:35:14.548 00:35:15.116 10:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:15.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.375 Nvme0n1 : 2.00 6005.50 23.46 0.00 0.00 0.00 0.00 0.00 00:35:15.375 [2024-12-05T10:07:03.004Z] =================================================================================================================== 00:35:15.375 [2024-12-05T10:07:03.004Z] Total : 6005.50 23.46 0.00 0.00 0.00 0.00 0.00 00:35:15.375 00:35:15.633 true 00:35:15.633 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:15.633 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:15.892 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:15.892 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:15.892 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 113870
00:35:16.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:16.459 Nvme0n1 : 3.00 5954.00 23.26 0.00 0.00 0.00 0.00 0.00
00:35:16.459 [2024-12-05T10:07:04.088Z] ===================================================================================================================
00:35:16.459 [2024-12-05T10:07:04.088Z] Total : 5954.00 23.26 0.00 0.00 0.00 0.00 0.00
00:35:16.459
00:35:17.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:17.395 Nvme0n1 : 4.00 5956.00 23.27 0.00 0.00 0.00 0.00 0.00
00:35:17.395 [2024-12-05T10:07:05.024Z] ===================================================================================================================
00:35:17.395 [2024-12-05T10:07:05.024Z] Total : 5956.00 23.27 0.00 0.00 0.00 0.00 0.00
00:35:17.395
00:35:18.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:18.330 Nvme0n1 : 5.00 5879.00 22.96 0.00 0.00 0.00 0.00 0.00
00:35:18.330 [2024-12-05T10:07:05.959Z] ===================================================================================================================
00:35:18.330 [2024-12-05T10:07:05.959Z] Total : 5879.00 22.96 0.00 0.00 0.00 0.00 0.00
00:35:18.330
00:35:19.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:19.264 Nvme0n1 : 6.00 5880.67 22.97 0.00 0.00 0.00 0.00 0.00
00:35:19.264 [2024-12-05T10:07:06.893Z] ===================================================================================================================
00:35:19.264 [2024-12-05T10:07:06.893Z] Total : 5880.67 22.97 0.00 0.00 0.00 0.00 0.00
00:35:19.264
00:35:20.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:20.200 Nvme0n1 : 7.00 5876.00 22.95 0.00 0.00 0.00 0.00 0.00
00:35:20.200 [2024-12-05T10:07:07.829Z] ===================================================================================================================
00:35:20.200 [2024-12-05T10:07:07.829Z] Total : 5876.00 22.95 0.00 0.00 0.00 0.00 0.00
00:35:20.200
00:35:21.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:21.578 Nvme0n1 : 8.00 5873.00 22.94 0.00 0.00 0.00 0.00 0.00
00:35:21.578 [2024-12-05T10:07:09.207Z] ===================================================================================================================
00:35:21.578 [2024-12-05T10:07:09.207Z] Total : 5873.00 22.94 0.00 0.00 0.00 0.00 0.00
00:35:21.578
00:35:22.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:22.515 Nvme0n1 : 9.00 5881.44 22.97 0.00 0.00 0.00 0.00 0.00
00:35:22.515 [2024-12-05T10:07:10.144Z] ===================================================================================================================
00:35:22.515 [2024-12-05T10:07:10.144Z] Total : 5881.44 22.97 0.00 0.00 0.00 0.00 0.00
00:35:22.515
00:35:23.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:23.452 Nvme0n1 : 10.00 5865.50 22.91 0.00 0.00 0.00 0.00 0.00
00:35:23.452 [2024-12-05T10:07:11.081Z] ===================================================================================================================
00:35:23.452 [2024-12-05T10:07:11.081Z] Total : 5865.50 22.91 0.00 0.00 0.00 0.00 0.00
00:35:23.452
00:35:23.452
00:35:23.452 Latency(us)
00:35:23.452 [2024-12-05T10:07:11.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:23.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:23.452 Nvme0n1 : 10.01 5869.98 22.93 0.00 0.00 21800.46 10426.18 106764.10
00:35:23.452 [2024-12-05T10:07:11.081Z] ===================================================================================================================
00:35:23.452 [2024-12-05T10:07:11.081Z] Total : 5869.98 22.93 0.00 0.00 21800.46 10426.18 106764.10
00:35:23.452 {
00:35:23.452   "results": [
00:35:23.452     {
00:35:23.452       "job": "Nvme0n1",
00:35:23.452       "core_mask": "0x2",
00:35:23.452       "workload": "randwrite",
00:35:23.452       "status": "finished",
00:35:23.452       "queue_depth": 128,
00:35:23.452       "io_size": 4096,
00:35:23.452       "runtime": 10.01418,
00:35:23.452       "iops": 5869.9763735023735,
00:35:23.452       "mibps": 22.929595208993646,
00:35:23.452       "io_failed": 0,
00:35:23.452       "io_timeout": 0,
00:35:23.452       "avg_latency_us": 21800.459238276988,
00:35:23.452       "min_latency_us": 10426.181818181818,
00:35:23.452       "max_latency_us": 106764.10181818182
00:35:23.452     }
00:35:23.452   ],
00:35:23.452   "core_count": 1
00:35:23.452 }
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 113822
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 113822 ']'
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 113822
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113822
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:23.452 killing process with pid 113822
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113822'
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 113822
00:35:23.452 Received shutdown signal, test time was about 10.000000 seconds
00:35:23.452
00:35:23.452 Latency(us)
00:35:23.452 [2024-12-05T10:07:11.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:23.452 [2024-12-05T10:07:11.081Z] ===================================================================================================================
00:35:23.452 [2024-12-05T10:07:11.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:23.452 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 113822
00:35:24.418 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:35:24.418 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.677 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:24.677 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 113232 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 113232 00:35:24.936 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 113232 Killed "${NVMF_APP[@]}" "$@" 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=114026 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 114026 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 114026 ']' 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
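The kill -9 just traced is what makes this variant dirty: the target that owns the lvstore (pid 113232, the "${NVMF_APP[@]}" process reported Killed above) goes down with no teardown, so the blobstore metadata on the aio file is never closed cleanly, and a fresh interrupt-mode target is started over it. In outline, with the flags taken from the trace; nvmfpid stands for the harness's target pid variable:

    kill -9 "$nvmfpid"        # SIGKILL: no clean shutdown, lvstore metadata left dirty
    wait "$nvmfpid" || true   # reap it; the shell reports the job as Killed
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!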
00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.936 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:25.194 [2024-12-05 10:07:12.658608] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:25.194 [2024-12-05 10:07:12.661103] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:25.194 [2024-12-05 10:07:12.661224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.452 [2024-12-05 10:07:12.841096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.452 [2024-12-05 10:07:12.947257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.452 [2024-12-05 10:07:12.947354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.452 [2024-12-05 10:07:12.947386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.452 [2024-12-05 10:07:12.947402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.452 [2024-12-05 10:07:12.947413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.452 [2024-12-05 10:07:12.948616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.712 [2024-12-05 10:07:13.234306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:25.712 [2024-12-05 10:07:13.234704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
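With the replacement target up in interrupt mode, the trace that follows re-creates the aio bdev over the grown file; the blobstore detects the unclean shutdown and replays recovery (the "Performing recovery on blobstore" notices below), after which the lvol must reappear and the lvstore must report the post-grow geometry. The verification reduces to roughly this sketch (UUIDs and expected counts are the ones from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # the lvol must be recovered and visible again within the 2000 ms timeout
    $rpc bdev_get_bdevs -b ef31877d-a42e-4891-a84b-4ae164978b8d -t 2000 >/dev/null
    # grow survived the crash: 99 total data clusters, 61 still free
    $rpc bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 \
        | jq -e '.[0].free_clusters == 61 and .[0].total_data_clusters == 99'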
00:35:25.992 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.992 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:25.992 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:25.992 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.992 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:26.277 [2024-12-05 10:07:13.808548] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:26.277 [2024-12-05 10:07:13.809060] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:26.277 [2024-12-05 10:07:13.809367] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:26.277 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:26.547 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef31877d-a42e-4891-a84b-4ae164978b8d -t 2000 00:35:26.804 [ 00:35:26.804 { 00:35:26.804 "aliases": [ 00:35:26.804 "lvs/lvol" 00:35:26.804 ], 00:35:26.804 "assigned_rate_limits": { 00:35:26.804 "r_mbytes_per_sec": 0, 00:35:26.804 "rw_ios_per_sec": 0, 00:35:26.804 "rw_mbytes_per_sec": 0, 00:35:26.804 "w_mbytes_per_sec": 0 00:35:26.804 }, 00:35:26.804 "block_size": 4096, 00:35:26.804 "claimed": false, 00:35:26.804 "driver_specific": { 00:35:26.804 "lvol": { 00:35:26.804 "base_bdev": "aio_bdev", 00:35:26.804 "clone": false, 00:35:26.804 "esnap_clone": false, 00:35:26.804 
"lvol_store_uuid": "937434e4-554b-4aee-82cc-828b83ecb774", 00:35:26.804 "num_allocated_clusters": 38, 00:35:26.804 "snapshot": false, 00:35:26.804 "thin_provision": false 00:35:26.804 } 00:35:26.804 }, 00:35:26.804 "name": "ef31877d-a42e-4891-a84b-4ae164978b8d", 00:35:26.804 "num_blocks": 38912, 00:35:26.804 "product_name": "Logical Volume", 00:35:26.804 "supported_io_types": { 00:35:26.805 "abort": false, 00:35:26.805 "compare": false, 00:35:26.805 "compare_and_write": false, 00:35:26.805 "copy": false, 00:35:26.805 "flush": false, 00:35:26.805 "get_zone_info": false, 00:35:26.805 "nvme_admin": false, 00:35:26.805 "nvme_io": false, 00:35:26.805 "nvme_io_md": false, 00:35:26.805 "nvme_iov_md": false, 00:35:26.805 "read": true, 00:35:26.805 "reset": true, 00:35:26.805 "seek_data": true, 00:35:26.805 "seek_hole": true, 00:35:26.805 "unmap": true, 00:35:26.805 "write": true, 00:35:26.805 "write_zeroes": true, 00:35:26.805 "zcopy": false, 00:35:26.805 "zone_append": false, 00:35:26.805 "zone_management": false 00:35:26.805 }, 00:35:26.805 "uuid": "ef31877d-a42e-4891-a84b-4ae164978b8d", 00:35:26.805 "zoned": false 00:35:26.805 } 00:35:26.805 ] 00:35:26.805 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:26.805 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:26.805 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:27.062 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:27.062 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:27.062 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:27.319 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:27.319 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:27.577 [2024-12-05 10:07:15.017852] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:27.577 
10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:27.577 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:27.835 2024/12/05 10:07:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:937434e4-554b-4aee-82cc-828b83ecb774], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:35:27.835 request: 00:35:27.835 { 00:35:27.835 "method": "bdev_lvol_get_lvstores", 00:35:27.835 "params": { 00:35:27.835 "uuid": "937434e4-554b-4aee-82cc-828b83ecb774" 00:35:27.835 } 00:35:27.835 } 00:35:27.835 Got JSON-RPC error response 00:35:27.835 GoRPCClient: error on JSON-RPC call 00:35:27.835 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:27.835 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.835 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.835 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.835 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:28.092 aio_bdev 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:28.092 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:28.349 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef31877d-a42e-4891-a84b-4ae164978b8d -t 2000 00:35:28.606 [ 00:35:28.606 { 00:35:28.606 "aliases": [ 00:35:28.606 "lvs/lvol" 00:35:28.606 ], 00:35:28.606 "assigned_rate_limits": { 00:35:28.606 "r_mbytes_per_sec": 0, 00:35:28.606 "rw_ios_per_sec": 0, 00:35:28.606 "rw_mbytes_per_sec": 0, 00:35:28.606 "w_mbytes_per_sec": 0 00:35:28.606 }, 00:35:28.606 "block_size": 4096, 00:35:28.606 "claimed": false, 00:35:28.606 "driver_specific": { 00:35:28.607 "lvol": { 00:35:28.607 "base_bdev": "aio_bdev", 00:35:28.607 "clone": false, 00:35:28.607 "esnap_clone": false, 00:35:28.607 "lvol_store_uuid": "937434e4-554b-4aee-82cc-828b83ecb774", 00:35:28.607 "num_allocated_clusters": 38, 00:35:28.607 "snapshot": false, 00:35:28.607 "thin_provision": false 00:35:28.607 } 00:35:28.607 }, 00:35:28.607 "name": "ef31877d-a42e-4891-a84b-4ae164978b8d", 00:35:28.607 "num_blocks": 38912, 00:35:28.607 "product_name": "Logical Volume", 00:35:28.607 "supported_io_types": { 00:35:28.607 "abort": false, 00:35:28.607 "compare": false, 00:35:28.607 "compare_and_write": false, 00:35:28.607 "copy": false, 00:35:28.607 "flush": false, 00:35:28.607 "get_zone_info": false, 00:35:28.607 "nvme_admin": false, 00:35:28.607 "nvme_io": false, 00:35:28.607 "nvme_io_md": false, 00:35:28.607 "nvme_iov_md": false, 00:35:28.607 "read": true, 00:35:28.607 "reset": true, 00:35:28.607 "seek_data": true, 00:35:28.607 "seek_hole": true, 00:35:28.607 "unmap": true, 00:35:28.607 "write": true, 00:35:28.607 "write_zeroes": true, 00:35:28.607 "zcopy": false, 00:35:28.607 "zone_append": false, 00:35:28.607 "zone_management": false 00:35:28.607 }, 00:35:28.607 "uuid": "ef31877d-a42e-4891-a84b-4ae164978b8d", 00:35:28.607 "zoned": false 00:35:28.607 } 00:35:28.607 ] 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:28.607 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:28.864 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:28.864 
10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ef31877d-a42e-4891-a84b-4ae164978b8d 00:35:29.123 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 937434e4-554b-4aee-82cc-828b83ecb774 00:35:29.381 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:29.639 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:30.206 00:35:30.206 real 0m20.817s 00:35:30.206 user 0m26.848s 00:35:30.206 sys 0m10.484s 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 ************************************ 00:35:30.206 END TEST lvs_grow_dirty 00:35:30.206 ************************************ 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:30.206 nvmf_trace.0 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:30.206 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.206 10:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.206 rmmod nvme_tcp 00:35:30.463 rmmod nvme_fabrics 00:35:30.463 rmmod nvme_keyring 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 114026 ']' 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 114026 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 114026 ']' 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 114026 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.463 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114026 00:35:30.464 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.464 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.464 killing process with pid 114026 00:35:30.464 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114026' 00:35:30.464 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 114026 00:35:30.464 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 114026 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:35:31.400 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:31.400 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:31.400 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:31.400 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:35:31.659 00:35:31.659 real 0m43.584s 00:35:31.659 user 0m47.760s 00:35:31.659 sys 0m13.656s 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.659 ************************************ 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:31.659 END TEST nvmf_lvs_grow 00:35:31.659 ************************************ 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:31.659 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:31.660 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.660 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:31.660 ************************************ 00:35:31.660 START TEST nvmf_bdev_io_wait 00:35:31.660 ************************************ 00:35:31.660 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:31.921 * Looking for test storage... 00:35:31.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.921 --rc genhtml_branch_coverage=1 00:35:31.921 --rc genhtml_function_coverage=1 00:35:31.921 --rc genhtml_legend=1 00:35:31.921 --rc geninfo_all_blocks=1 00:35:31.921 --rc geninfo_unexecuted_blocks=1 00:35:31.921 00:35:31.921 ' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.921 --rc genhtml_branch_coverage=1 00:35:31.921 --rc genhtml_function_coverage=1 00:35:31.921 --rc genhtml_legend=1 00:35:31.921 --rc geninfo_all_blocks=1 00:35:31.921 --rc geninfo_unexecuted_blocks=1 00:35:31.921 00:35:31.921 ' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.921 --rc genhtml_branch_coverage=1 00:35:31.921 --rc genhtml_function_coverage=1 00:35:31.921 --rc genhtml_legend=1 00:35:31.921 --rc geninfo_all_blocks=1 00:35:31.921 --rc geninfo_unexecuted_blocks=1 00:35:31.921 00:35:31.921 ' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.921 --rc genhtml_branch_coverage=1 00:35:31.921 --rc genhtml_function_coverage=1 00:35:31.921 --rc genhtml_legend=1 00:35:31.921 --rc geninfo_all_blocks=1 00:35:31.921 --rc 
geninfo_unexecuted_blocks=1 00:35:31.921 00:35:31.921 ' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.921 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:31.922 Cannot find device "nvmf_init_br" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:31.922 Cannot find device "nvmf_init_br2" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:31.922 Cannot find device "nvmf_tgt_br" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:31.922 Cannot find device "nvmf_tgt_br2" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:31.922 Cannot find device "nvmf_init_br" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:31.922 Cannot find device "nvmf_init_br2" 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:35:31.922 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:35:32.182 Cannot find device "nvmf_tgt_br" 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:32.182 Cannot find device "nvmf_tgt_br2" 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:32.182 Cannot find device "nvmf_br" 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:32.182 Cannot find device "nvmf_init_if" 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:32.182 Cannot find device "nvmf_init_if2" 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:32.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:32.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:32.182 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:32.183 10:07:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:32.183 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:32.442 
10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:35:32.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:35:32.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms
00:35:32.442 
00:35:32.442 --- 10.0.0.3 ping statistics ---
00:35:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:32.442 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:35:32.442 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:35:32.442 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms
00:35:32.442 
00:35:32.442 --- 10.0.0.4 ping statistics ---
00:35:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:32.442 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:35:32.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:32.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:35:32.442 
00:35:32.442 --- 10.0.0.1 ping statistics ---
00:35:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:32.442 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:35:32.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:32.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:35:32.442 00:35:32.442 --- 10.0.0.2 ping statistics --- 00:35:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.442 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.442 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=114504 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 114504 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 114504 ']' 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
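The nvmf_veth_init trace above reduces to a short netns/veth/bridge recipe. A condensed sketch of the topology it builds, using the interface names and addresses from the trace (only one of the two initiator/target interface pairs shown; the *_if2 pair and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge joins the two peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.3                                         # sanity checks, as traced above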
00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.443 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:32.701 [2024-12-05 10:07:20.070599] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:32.701 [2024-12-05 10:07:20.074039] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:32.701 [2024-12-05 10:07:20.074188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.701 [2024-12-05 10:07:20.266728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:32.959 [2024-12-05 10:07:20.373837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.959 [2024-12-05 10:07:20.373919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.959 [2024-12-05 10:07:20.373935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.959 [2024-12-05 10:07:20.373960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.959 [2024-12-05 10:07:20.373971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.959 [2024-12-05 10:07:20.376021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.959 [2024-12-05 10:07:20.376190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.959 [2024-12-05 10:07:20.376354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.959 [2024-12-05 10:07:20.376897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.959 [2024-12-05 10:07:20.377204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
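The waitforlisten call traced here gates the test on the target's RPC socket rather than on a fixed sleep. A rough equivalent of what the helper does, run from an SPDK checkout (the loop below is an approximation, not the exact autotest_common.sh implementation):

    # Poll until nvmf_tgt (pid 114504 in this run) answers on /var/tmp/spdk.sock.
    pid=114504
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || exit 1          # give up if the target died
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            break                                      # RPC server is up, safe to configure
        fi
        sleep 0.1
    done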
00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:33.782 [2024-12-05 10:07:21.376399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:33.782 [2024-12-05 10:07:21.377701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:33.782 [2024-12-05 10:07:21.379125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:33.782 [2024-12-05 10:07:21.380073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
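Because the target was started with --wait-for-rpc, all configuration happens over the RPC socket: bdev options and framework_start_init above, then transport, backing bdev, subsystem, namespace, and listener below. Consolidated into plain scripts/rpc.py calls, the traced sequence is roughly:

    scripts/rpc.py bdev_set_options -p 5 -c 1    # deliberately tiny bdev_io pool, to force the I/O-wait path this test exercises
    scripts/rpc.py framework_start_init          # finish the deferred subsystem initialization
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

rpc_cmd in the trace issues the same RPCs over the test's persistent RPC connection.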
00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:33.782 [2024-12-05 10:07:21.389633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.782 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:34.040 Malloc0 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:34.040 [2024-12-05 10:07:21.522150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=114557 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:34.040 10:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=114559 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.040 { 00:35:34.040 "params": { 00:35:34.040 "name": "Nvme$subsystem", 00:35:34.040 "trtype": "$TEST_TRANSPORT", 00:35:34.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.040 "adrfam": "ipv4", 00:35:34.040 "trsvcid": "$NVMF_PORT", 00:35:34.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.040 "hdgst": ${hdgst:-false}, 00:35:34.040 "ddgst": ${ddgst:-false} 00:35:34.040 }, 00:35:34.040 "method": "bdev_nvme_attach_controller" 00:35:34.040 } 00:35:34.040 EOF 00:35:34.040 )") 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=114561 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.040 { 00:35:34.040 "params": { 00:35:34.040 "name": "Nvme$subsystem", 00:35:34.040 "trtype": "$TEST_TRANSPORT", 00:35:34.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.040 "adrfam": "ipv4", 00:35:34.040 "trsvcid": "$NVMF_PORT", 00:35:34.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.040 "hdgst": ${hdgst:-false}, 00:35:34.040 "ddgst": ${ddgst:-false} 00:35:34.040 }, 00:35:34.040 "method": "bdev_nvme_attach_controller" 00:35:34.040 } 00:35:34.040 EOF 00:35:34.040 )") 00:35:34.040 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.041 { 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme$subsystem", 00:35:34.041 "trtype": "$TEST_TRANSPORT", 00:35:34.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "$NVMF_PORT", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.041 "hdgst": ${hdgst:-false}, 00:35:34.041 "ddgst": ${ddgst:-false} 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 } 00:35:34.041 EOF 00:35:34.041 )") 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=114564 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme1", 00:35:34.041 "trtype": "tcp", 00:35:34.041 "traddr": "10.0.0.3", 00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "4420", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.041 "hdgst": false, 00:35:34.041 "ddgst": false 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 }' 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.041 { 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme$subsystem", 00:35:34.041 "trtype": "$TEST_TRANSPORT", 00:35:34.041 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "$NVMF_PORT", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.041 "hdgst": ${hdgst:-false}, 00:35:34.041 "ddgst": ${ddgst:-false} 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 } 00:35:34.041 EOF 00:35:34.041 )") 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme1", 00:35:34.041 "trtype": "tcp", 00:35:34.041 "traddr": "10.0.0.3", 00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "4420", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.041 "hdgst": false, 00:35:34.041 "ddgst": false 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 }' 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme1", 00:35:34.041 "trtype": "tcp", 00:35:34.041 "traddr": "10.0.0.3", 00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "4420", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.041 "hdgst": false, 00:35:34.041 "ddgst": false 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 }' 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.041 "params": { 00:35:34.041 "name": "Nvme1", 00:35:34.041 "trtype": "tcp", 00:35:34.041 "traddr": "10.0.0.3", 00:35:34.041 "adrfam": "ipv4", 00:35:34.041 "trsvcid": "4420", 00:35:34.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.041 "hdgst": false, 00:35:34.041 "ddgst": false 00:35:34.041 }, 00:35:34.041 "method": "bdev_nvme_attach_controller" 00:35:34.041 }' 00:35:34.041 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 114557 00:35:34.041 [2024-12-05 10:07:21.653104] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:34.041 [2024-12-05 10:07:21.654125] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:34.041 [2024-12-05 10:07:21.657522] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:35:34.041 [2024-12-05 10:07:21.657882] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:34.299 [2024-12-05 10:07:21.671575] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:34.299 [2024-12-05 10:07:21.671894] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:34.299 [2024-12-05 10:07:21.692224] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:34.299 [2024-12-05 10:07:21.692393] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:34.299 [2024-12-05 10:07:21.922853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.557 [2024-12-05 10:07:21.998111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.557 [2024-12-05 10:07:22.059111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:34.557 [2024-12-05 10:07:22.084830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.557 [2024-12-05 10:07:22.108362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:34.557 [2024-12-05 10:07:22.161872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.814 [2024-12-05 10:07:22.224285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:34.814 [2024-12-05 10:07:22.303399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:34.814 Running I/O for 1 seconds... 00:35:35.071 Running I/O for 1 seconds... 00:35:35.071 Running I/O for 1 seconds... 00:35:35.071 Running I/O for 1 seconds... 
00:35:36.004 168448.00 IOPS, 658.00 MiB/s
00:35:36.004 Latency(us)
00:35:36.004 [2024-12-05T10:07:23.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.004 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:35:36.004 Nvme1n1 : 1.00 167992.17 656.22 0.00 0.00 757.93 599.51 2115.03
00:35:36.004 [2024-12-05T10:07:23.633Z] ===================================================================================================================
00:35:36.004 [2024-12-05T10:07:23.633Z] Total : 167992.17 656.22 0.00 0.00 757.93 599.51 2115.03
00:35:36.004 5028.00 IOPS, 19.64 MiB/s
00:35:36.004 Latency(us)
00:35:36.004 [2024-12-05T10:07:23.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.004 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:35:36.004 Nvme1n1 : 1.03 5030.25 19.65 0.00 0.00 25196.85 5510.98 45041.11
00:35:36.004 [2024-12-05T10:07:23.633Z] ===================================================================================================================
00:35:36.004 [2024-12-05T10:07:23.633Z] Total : 5030.25 19.65 0.00 0.00 25196.85 5510.98 45041.11
00:35:36.004 4891.00 IOPS, 19.11 MiB/s
00:35:36.004 Latency(us)
00:35:36.004 [2024-12-05T10:07:23.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.004 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:35:36.004 Nvme1n1 : 1.01 4996.96 19.52 0.00 0.00 25512.87 6613.18 47185.92
00:35:36.004 [2024-12-05T10:07:23.633Z] ===================================================================================================================
00:35:36.004 [2024-12-05T10:07:23.633Z] Total : 4996.96 19.52 0.00 0.00 25512.87 6613.18 47185.92
00:35:36.262 7742.00 IOPS, 30.24 MiB/s
00:35:36.262 Latency(us)
00:35:36.262 [2024-12-05T10:07:23.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.262 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:35:36.262 Nvme1n1 : 1.01 7822.24 30.56 0.00 0.00 16292.19 6553.60 25737.77
00:35:36.262 [2024-12-05T10:07:23.891Z] ===================================================================================================================
00:35:36.262 [2024-12-05T10:07:23.891Z] Total : 7822.24 30.56 0.00 0.00 16292.19 6553.60 25737.77
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 114559
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 114561
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 114564
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.829 rmmod nvme_tcp 00:35:36.829 rmmod nvme_fabrics 00:35:36.829 rmmod nvme_keyring 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.829 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 114504 ']' 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 114504 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 114504 ']' 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 114504 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.830 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114504 00:35:37.088 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:37.088 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:37.088 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114504' 00:35:37.088 killing process with pid 114504 00:35:37.088 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 114504 00:35:37.088 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 114504 00:35:37.655 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:37.655 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.655 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.655 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:37.655 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:35:37.914 
10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.914 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:35:38.181 00:35:38.181 real 0m6.279s 00:35:38.181 user 0m20.958s 00:35:38.181 sys 0m3.285s 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.181 ************************************ 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:38.181 END TEST nvmf_bdev_io_wait 00:35:38.181 
************************************ 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:38.181 ************************************ 00:35:38.181 START TEST nvmf_queue_depth 00:35:38.181 ************************************ 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:38.181 * Looking for test storage... 00:35:38.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.181 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.182 --rc genhtml_branch_coverage=1 00:35:38.182 --rc genhtml_function_coverage=1 00:35:38.182 --rc genhtml_legend=1 00:35:38.182 --rc geninfo_all_blocks=1 00:35:38.182 --rc geninfo_unexecuted_blocks=1 00:35:38.182 00:35:38.182 ' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:38.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.182 --rc genhtml_branch_coverage=1 00:35:38.182 --rc genhtml_function_coverage=1 00:35:38.182 --rc genhtml_legend=1 00:35:38.182 --rc geninfo_all_blocks=1 00:35:38.182 --rc geninfo_unexecuted_blocks=1 00:35:38.182 00:35:38.182 ' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.182 --rc genhtml_branch_coverage=1 00:35:38.182 --rc genhtml_function_coverage=1 00:35:38.182 --rc genhtml_legend=1 00:35:38.182 --rc geninfo_all_blocks=1 00:35:38.182 --rc geninfo_unexecuted_blocks=1 00:35:38.182 00:35:38.182 ' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.182 --rc genhtml_branch_coverage=1 00:35:38.182 --rc genhtml_function_coverage=1 00:35:38.182 --rc genhtml_legend=1 00:35:38.182 --rc geninfo_all_blocks=1 00:35:38.182 --rc 
geninfo_unexecuted_blocks=1 00:35:38.182 00:35:38.182 ' 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:38.182 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:38.440 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:38.441 Cannot find device "nvmf_init_br" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:38.441 Cannot find device "nvmf_init_br2" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:38.441 Cannot find device "nvmf_tgt_br" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:38.441 Cannot find device "nvmf_tgt_br2" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:38.441 Cannot find device "nvmf_init_br" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:38.441 Cannot find device "nvmf_init_br2" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:35:38.441 
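The "Cannot find device" lines above are nvmf_veth_fini clearing leftovers from any previous run before nvmf_veth_init rebuilds the topology; the trace below then creates everything from scratch. Condensed to one initiator pair and one target pair (a sketch assembled from the commands in this log, with the second if2/br2 pair omitted; interface and namespace names as traced):

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # a bridge on the host joins the peer ends of both pairs
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four pings that follow the setup confirm reachability in both directions across the bridge before any NVMe/TCP traffic is attempted.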
10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:38.441 Cannot find device "nvmf_tgt_br" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:38.441 Cannot find device "nvmf_tgt_br2" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:38.441 Cannot find device "nvmf_br" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:38.441 Cannot find device "nvmf_init_if" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:38.441 Cannot find device "nvmf_init_if2" 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:38.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:38.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:35:38.441 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:38.441 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:38.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:38.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:35:38.700 00:35:38.700 --- 10.0.0.3 ping statistics --- 00:35:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.700 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:38.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:38.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:35:38.700 00:35:38.700 --- 10.0.0.4 ping statistics --- 00:35:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.700 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:38.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:38.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:35:38.700 00:35:38.700 --- 10.0.0.1 ping statistics --- 00:35:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.700 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:38.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:38.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:35:38.700 00:35:38.700 --- 10.0.0.2 ping statistics --- 00:35:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.700 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:38.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=114865 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 114865 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 114865 ']' 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.700 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:38.958 [2024-12-05 10:07:26.402786] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:38.959 [2024-12-05 10:07:26.406275] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:38.959 [2024-12-05 10:07:26.406583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.217 [2024-12-05 10:07:26.606864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.217 [2024-12-05 10:07:26.772301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.217 [2024-12-05 10:07:26.772703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.217 [2024-12-05 10:07:26.772921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.217 [2024-12-05 10:07:26.773218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.217 [2024-12-05 10:07:26.773252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.217 [2024-12-05 10:07:26.774953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.782 [2024-12-05 10:07:27.131432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:39.782 [2024-12-05 10:07:27.131784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
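The notices above show the target starting with --interrupt-mode: interrupt mode is enabled globally and each spdk_thread is switched over as it is created, so the reactor sleeps on file descriptors instead of busy-polling between RPCs. Reduced to its two essential steps, what nvmfappstart did here is (a sketch; the real waitforlisten also caps its retries and checks that the pid stays alive):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # probe the RPC socket until the application answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done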
00:35:39.783 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.783 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:39.783 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:39.783 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:39.783 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.040 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.041 [2024-12-05 10:07:27.444528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.041 Malloc0 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
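Stripped of the rpc_cmd and xtrace wrappers traced above, the provisioning boils down to five RPCs: create the TCP transport with an 8 KiB IO unit size, back it with a 64 MiB malloc bdev of 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from earlier in the script), and publish that bdev as a namespace of cnode1 on the target-side veth address. As direct calls (a sketch; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The 'TCP Transport Init' notice above and the 'Target Listening on 10.0.0.3 port 4420' notice just below are the target acknowledging the first and last of these.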
00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.041 [2024-12-05 10:07:27.560352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=114914 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 114914 /var/tmp/bdevperf.sock 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 114914 ']' 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:40.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.041 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:40.299 [2024-12-05 10:07:27.691782] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
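bdevperf is launched with -z, so it starts idle and is configured over its own RPC socket; the EAL output below is that second process coming up. The initiator side reduces to three commands (a sketch; the long /home/vagrant/spdk_repo/spdk paths from the trace are shortened to repo-relative ones):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # connect to the subsystem created above; this produces the NVMe0n1 bdev
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the configured run: 10 s of 4 KiB verify I/O
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 queue depth is the parameter that gives queue_depth.sh its name: the run keeps 1024 I/Os outstanding against the single malloc-backed namespace, as the "queue_depth": 1024 field in the JSON results below confirms.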
00:35:40.299 [2024-12-05 10:07:27.692248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114914 ]
00:35:40.299 [2024-12-05 10:07:27.876860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:40.299 [2024-12-05 10:07:27.973036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:41.127 NVMe0n1
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.127 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:35:41.127 Running I/O for 10 seconds...
00:35:43.443 8192.00 IOPS, 32.00 MiB/s
[2024-12-05T10:07:32.011Z] 8199.00 IOPS, 32.03 MiB/s
[2024-12-05T10:07:32.949Z] 8427.67 IOPS, 32.92 MiB/s
[2024-12-05T10:07:33.886Z] 8493.25 IOPS, 33.18 MiB/s
[2024-12-05T10:07:34.824Z] 8604.20 IOPS, 33.61 MiB/s
[2024-12-05T10:07:35.758Z] 8651.50 IOPS, 33.79 MiB/s
[2024-12-05T10:07:37.133Z] 8677.00 IOPS, 33.89 MiB/s
[2024-12-05T10:07:37.773Z] 8742.00 IOPS, 34.15 MiB/s
[2024-12-05T10:07:39.158Z] 8796.22 IOPS, 34.36 MiB/s
[2024-12-05T10:07:39.158Z] 8823.80 IOPS, 34.47 MiB/s
00:35:51.529 Latency(us)
[2024-12-05T10:07:39.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.529 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:35:51.529 Verification LBA range: start 0x0 length 0x4000
00:35:51.529 NVMe0n1 : 10.07 8861.77 34.62 0.00 0.00 115069.35 17873.45 76736.70
00:35:51.529 [2024-12-05T10:07:39.158Z] ===================================================================================================================
00:35:51.529 [2024-12-05T10:07:39.158Z] Total : 8861.77 34.62 0.00 0.00 115069.35 17873.45 76736.70
00:35:51.529 {
00:35:51.529   "results": [
00:35:51.529     {
00:35:51.529       "job": "NVMe0n1",
00:35:51.529       "core_mask": "0x1",
00:35:51.529       "workload": "verify",
00:35:51.529       "status": "finished",
00:35:51.529       "verify_range": {
00:35:51.529         "start": 0,
00:35:51.529         "length": 16384
00:35:51.529       },
00:35:51.529       "queue_depth": 1024,
00:35:51.529       "io_size": 4096,
00:35:51.529       "runtime": 10.072706,
00:35:51.529       "iops": 8861.769617816703,
00:35:51.529       "mibps": 34.616287569596494,
00:35:51.529       "io_failed": 0,
00:35:51.529       "io_timeout": 0,
00:35:51.529       "avg_latency_us": 115069.35322590698,
00:35:51.529       "min_latency_us": 17873.454545454544,
00:35:51.529       "max_latency_us": 76736.69818181818
00:35:51.529     }
00:35:51.529   ],
00:35:51.529   "core_count": 1
00:35:51.529 }
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 114914
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 114914 ']'
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 114914
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114914
00:35:51.529 killing process with pid 114914
00:35:51.529 Received shutdown signal, test time was about 10.000000 seconds
00:35:51.529
00:35:51.529 Latency(us)
00:35:51.529 [2024-12-05T10:07:39.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.529 [2024-12-05T10:07:39.158Z] ===================================================================================================================
00:35:51.529 [2024-12-05T10:07:39.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114914'
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 114914
00:35:51.529 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 114914
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:52.097 rmmod nvme_tcp
00:35:52.097 rmmod nvme_fabrics
00:35:52.097 rmmod nvme_keyring
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:35:52.097 10:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 114865 ']' 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 114865 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 114865 ']' 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 114865 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114865 00:35:52.097 killing process with pid 114865 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114865' 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 114865 00:35:52.097 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 114865 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:53.475 10:07:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:53.475 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:35:53.475 00:35:53.475 real 0m15.463s 00:35:53.475 user 0m24.334s 00:35:53.475 sys 0m2.978s 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.475 ************************************ 00:35:53.475 END TEST nvmf_queue_depth 00:35:53.475 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:53.475 ************************************ 00:35:53.735 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:53.735 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:53.735 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:53.736 ************************************ 00:35:53.736 START TEST nvmf_target_multipath 00:35:53.736 ************************************ 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:53.736 * Looking for test storage... 
00:35:53.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.736 --rc genhtml_branch_coverage=1 00:35:53.736 --rc genhtml_function_coverage=1 00:35:53.736 --rc genhtml_legend=1 00:35:53.736 --rc geninfo_all_blocks=1 00:35:53.736 --rc geninfo_unexecuted_blocks=1 00:35:53.736 00:35:53.736 ' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.736 --rc genhtml_branch_coverage=1 00:35:53.736 --rc genhtml_function_coverage=1 00:35:53.736 --rc genhtml_legend=1 00:35:53.736 --rc geninfo_all_blocks=1 00:35:53.736 --rc geninfo_unexecuted_blocks=1 00:35:53.736 00:35:53.736 ' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:53.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.736 --rc genhtml_branch_coverage=1 00:35:53.736 --rc genhtml_function_coverage=1 00:35:53.736 --rc genhtml_legend=1 00:35:53.736 --rc geninfo_all_blocks=1 00:35:53.736 --rc geninfo_unexecuted_blocks=1 00:35:53.736 00:35:53.736 ' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.736 --rc genhtml_branch_coverage=1 00:35:53.736 --rc genhtml_function_coverage=1 00:35:53.736 --rc 
genhtml_legend=1 00:35:53.736 --rc geninfo_all_blocks=1 00:35:53.736 --rc geninfo_unexecuted_blocks=1 00:35:53.736 00:35:53.736 ' 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.736 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.737 10:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.737 10:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:53.737 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:53.737 Cannot find device "nvmf_init_br" 00:35:53.738 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:35:53.738 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:53.738 Cannot find device "nvmf_init_br2" 00:35:53.738 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:35:53.738 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:53.997 Cannot find device "nvmf_tgt_br" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:53.997 Cannot find device "nvmf_tgt_br2" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:35:53.997 Cannot find device "nvmf_init_br" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:53.997 Cannot find device "nvmf_init_br2" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:53.997 Cannot find device "nvmf_tgt_br" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:53.997 Cannot find device "nvmf_tgt_br2" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:53.997 Cannot find device "nvmf_br" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:53.997 Cannot find device "nvmf_init_if" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:53.997 Cannot find device "nvmf_init_if2" 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:53.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:53.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:53.997 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:54.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:54.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:35:54.257 00:35:54.257 --- 10.0.0.3 ping statistics --- 00:35:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.257 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:54.257 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:54.257 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:35:54.257 00:35:54.257 --- 10.0.0.4 ping statistics --- 00:35:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.257 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:54.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:54.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:35:54.257 00:35:54.257 --- 10.0.0.1 ping statistics --- 00:35:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.257 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:54.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:54.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:35:54.257 00:35:54.257 --- 10.0.0.2 ping statistics --- 00:35:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.257 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=115312 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 115312 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 115312 ']' 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:54.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.257 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:54.257 [2024-12-05 10:07:41.862742] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:54.257 [2024-12-05 10:07:41.865777] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:54.257 [2024-12-05 10:07:41.865902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.516 [2024-12-05 10:07:42.060298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:54.774 [2024-12-05 10:07:42.216160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.774 [2024-12-05 10:07:42.216251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:54.774 [2024-12-05 10:07:42.216274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.774 [2024-12-05 10:07:42.216294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.774 [2024-12-05 10:07:42.216309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.774 [2024-12-05 10:07:42.219002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.774 [2024-12-05 10:07:42.219116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:54.774 [2024-12-05 10:07:42.219273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.774 [2024-12-05 10:07:42.219284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:55.033 [2024-12-05 10:07:42.550007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:55.033 [2024-12-05 10:07:42.550829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:55.033 [2024-12-05 10:07:42.552162] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:55.033 [2024-12-05 10:07:42.552165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:55.033 [2024-12-05 10:07:42.552532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
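A condensed sketch of the multipath bring-up that the trace below performs, assuming the repo path, addresses, and NQNs used in this run (10.0.0.3/10.0.0.4, nqn.2016-06.io.spdk:cnode1) and a target already running as started above; the flag readings in the comments are best-effort interpretations of the commands exactly as they appear in this log, not authoritative documentation:

  #!/usr/bin/env bash
  # Sketch only: condensed from the rpc.py / nvme calls traced below.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport (flags as used in this run)
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  # Two listeners on the same subsystem -> two paths to one namespace
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.4 -s 4420
  # Host side: connect both paths (-g/-G request TCP header/data digests)
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.4 -s 4420 -g -G
  # Fail over between paths by flipping per-listener ANA state
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.4 -s 4420 -n non_optimized

The test then drives fio through the /dev/nvme0n1 multipath device while checking that /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state track each transition, as seen in the check_ana_state calls traced below.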
00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:55.292 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:55.550 [2024-12-05 10:07:43.089667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.550 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:56.117 Malloc0 00:35:56.117 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:35:56.117 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:56.684 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:56.684 [2024-12-05 10:07:44.285805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:56.684 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:35:56.942 [2024-12-05 10:07:44.525650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:35:56.942 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:35:57.201 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:35:57.201 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:35:57.201 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:35:57.201 10:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:57.201 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:57.201 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=115446 00:35:59.730 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:35:59.731 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:35:59.731 [global] 00:35:59.731 thread=1 00:35:59.731 invalidate=1 00:35:59.731 rw=randrw 00:35:59.731 time_based=1 00:35:59.731 runtime=6 00:35:59.731 ioengine=libaio 00:35:59.731 direct=1 00:35:59.731 bs=4096 00:35:59.731 iodepth=128 00:35:59.731 norandommap=0 00:35:59.731 numjobs=1 00:35:59.731 00:35:59.731 verify_dump=1 00:35:59.731 verify_backlog=512 00:35:59.731 verify_state_save=0 00:35:59.731 do_verify=1 00:35:59.731 verify=crc32c-intel 00:35:59.731 [job0] 00:35:59.731 filename=/dev/nvme0n1 00:35:59.731 Could not set queue depth (nvme0n1) 00:35:59.731 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:59.731 fio-3.35 00:35:59.731 Starting 1 thread 00:36:00.297 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:36:00.554 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:00.812 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:00.813 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:36:02.189 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:02.189 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:36:02.189 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:02.189 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:36:02.189 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:02.448 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:36:03.385 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:03.385 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:36:03.385 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:03.385 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 115446 00:36:05.919 00:36:05.919 job0: (groupid=0, jobs=1): err= 0: pid=115470: Thu Dec 5 10:07:53 2024 00:36:05.919 read: IOPS=9819, BW=38.4MiB/s (40.2MB/s)(230MiB/6006msec) 00:36:05.919 slat (usec): min=4, max=6758, avg=58.62, stdev=284.86 00:36:05.919 clat (usec): min=2542, max=18512, avg=8632.22, stdev=1481.70 00:36:05.919 lat (usec): min=2550, max=18521, avg=8690.84, stdev=1497.23 00:36:05.919 clat percentiles (usec): 00:36:05.919 | 1.00th=[ 5276], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7570], 00:36:05.919 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8848], 00:36:05.919 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207], 00:36:05.919 | 99.00th=[12911], 99.50th=[13566], 99.90th=[14877], 99.95th=[15533], 00:36:05.919 | 99.99th=[17433] 00:36:05.919 bw ( KiB/s): min= 7816, max=24920, per=52.02%, avg=20434.91, stdev=6191.57, samples=11 00:36:05.919 iops : min= 1954, max= 6230, avg=5108.73, stdev=1547.89, samples=11 00:36:05.919 write: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(122MiB/5398msec); 0 zone resets 00:36:05.919 slat (usec): min=14, max=4725, avg=71.34, stdev=183.32 00:36:05.919 clat (usec): min=1116, max=18209, avg=8085.75, stdev=1147.75 00:36:05.919 lat (usec): min=1139, max=18232, avg=8157.09, stdev=1151.42 00:36:05.919 clat percentiles (usec): 00:36:05.919 | 1.00th=[ 4359], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7439], 00:36:05.919 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:36:05.919 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9634], 00:36:05.919 | 99.00th=[11469], 99.50th=[12256], 99.90th=[15401], 99.95th=[16450], 00:36:05.919 | 99.99th=[17957] 00:36:05.919 bw ( KiB/s): min= 7944, max=24824, per=88.48%, avg=20498.27, stdev=6151.46, samples=11 00:36:05.919 iops : min= 1986, max= 6206, avg=5124.55, stdev=1537.85, samples=11 00:36:05.919 lat (msec) : 2=0.01%, 4=0.37%, 10=89.13%, 20=10.50% 00:36:05.919 cpu : usr=4.83%, sys=20.51%, ctx=6732, majf=0, minf=90 00:36:05.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:36:05.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:05.919 issued rwts: total=58977,31263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:05.919 00:36:05.919 Run status group 0 (all jobs): 00:36:05.919 READ: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=230MiB (242MB), run=6006-6006msec 00:36:05.919 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=122MiB (128MB), run=5398-5398msec 00:36:05.919 00:36:05.919 Disk stats (read/write): 00:36:05.919 nvme0n1: ios=58123/30710, merge=0/0, ticks=471670/237620, in_queue=709290, util=98.73% 00:36:05.919 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:36:05.919 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:36:06.177 10:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=115592 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:36:07.111 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:36:07.111 [global] 00:36:07.111 thread=1 00:36:07.111 invalidate=1 00:36:07.111 rw=randrw 00:36:07.111 time_based=1 00:36:07.111 runtime=6 00:36:07.111 ioengine=libaio 00:36:07.111 direct=1 00:36:07.111 bs=4096 00:36:07.111 iodepth=128 00:36:07.111 norandommap=0 00:36:07.111 numjobs=1 00:36:07.111 00:36:07.111 verify_dump=1 00:36:07.111 verify_backlog=512 00:36:07.111 verify_state_save=0 00:36:07.111 do_verify=1 00:36:07.111 verify=crc32c-intel 00:36:07.111 [job0] 00:36:07.111 filename=/dev/nvme0n1 00:36:07.369 Could not set queue depth (nvme0n1) 00:36:07.369 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:07.369 fio-3.35 00:36:07.369 Starting 1 thread 00:36:08.344 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:36:08.602 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:08.860 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:36:09.795 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:09.795 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:09.795 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:09.795 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:36:10.054 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:10.314 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:36:11.249 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:11.249 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:11.249 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:11.249 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 115592 00:36:13.782 00:36:13.782 job0: (groupid=0, jobs=1): err= 0: pid=115613: Thu Dec 5 10:08:01 2024 00:36:13.782 read: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(242MiB/6003msec) 00:36:13.782 slat (usec): min=2, max=6210, avg=48.02, stdev=251.15 00:36:13.782 clat (usec): min=461, max=20082, avg=8263.33, stdev=1967.02 00:36:13.782 lat (usec): min=482, max=20099, avg=8311.35, stdev=1977.87 00:36:13.782 clat percentiles (usec): 00:36:13.782 | 1.00th=[ 2802], 5.00th=[ 5014], 10.00th=[ 5932], 20.00th=[ 7046], 00:36:13.782 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8586], 00:36:13.782 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11469], 00:36:13.782 | 99.00th=[13566], 99.50th=[14746], 99.90th=[17171], 99.95th=[17695], 00:36:13.782 | 99.99th=[19530] 00:36:13.782 bw ( KiB/s): min=13488, max=27489, per=53.07%, avg=21934.64, stdev=4236.43, samples=11 00:36:13.782 iops : min= 3372, max= 6872, avg=5483.64, stdev=1059.07, samples=11 00:36:13.782 write: IOPS=5961, BW=23.3MiB/s (24.4MB/s)(129MiB/5542msec); 0 zone resets 00:36:13.782 slat (usec): min=11, max=3054, avg=59.47, stdev=152.63 00:36:13.782 clat (usec): min=445, max=16254, avg=7537.37, stdev=1600.90 00:36:13.782 lat (usec): min=476, max=16277, avg=7596.85, stdev=1607.38 00:36:13.782 clat percentiles (usec): 00:36:13.782 | 1.00th=[ 3130], 5.00th=[ 4228], 10.00th=[ 5080], 20.00th=[ 6783], 00:36:13.782 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:36:13.782 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9634], 00:36:13.782 | 99.00th=[11469], 99.50th=[12518], 99.90th=[14746], 99.95th=[15139], 00:36:13.782 | 99.99th=[15795] 00:36:13.782 bw ( KiB/s): min=14128, 
max=28487, per=91.95%, avg=21924.27, stdev=4183.92, samples=11 00:36:13.782 iops : min= 3532, max= 7121, avg=5481.00, stdev=1045.86, samples=11 00:36:13.782 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:36:13.782 lat (msec) : 2=0.27%, 4=2.89%, 10=85.93%, 20=10.87%, 50=0.01% 00:36:13.782 cpu : usr=5.28%, sys=21.05%, ctx=7423, majf=0, minf=66 00:36:13.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:36:13.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:13.782 issued rwts: total=62021,33036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:13.782 00:36:13.782 Run status group 0 (all jobs): 00:36:13.782 READ: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6003-6003msec 00:36:13.782 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=129MiB (135MB), run=5542-5542msec 00:36:13.782 00:36:13.782 Disk stats (read/write): 00:36:13.782 nvme0n1: ios=61029/32524, merge=0/0, ticks=477031/234910, in_queue=711941, util=98.63% 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:13.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:36:13.782 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:14.040 10:08:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.040 rmmod nvme_tcp 00:36:14.040 rmmod nvme_fabrics 00:36:14.040 rmmod nvme_keyring 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:14.040 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 115312 ']' 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 115312 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 115312 ']' 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 115312 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115312 00:36:14.041 killing process with pid 115312 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115312' 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 115312 00:36:14.041 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 115312 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:36:15.418 ************************************ 00:36:15.418 END TEST nvmf_target_multipath 00:36:15.418 ************************************ 00:36:15.418 00:36:15.418 real 0m21.794s 00:36:15.418 user 1m12.448s 00:36:15.418 sys 0m8.297s 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:15.418 ************************************ 00:36:15.418 START TEST nvmf_zcopy 00:36:15.418 ************************************ 00:36:15.418 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:15.679 * Looking for test storage... 00:36:15.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.679 --rc genhtml_branch_coverage=1 00:36:15.679 --rc genhtml_function_coverage=1 00:36:15.679 --rc genhtml_legend=1 00:36:15.679 --rc geninfo_all_blocks=1 00:36:15.679 --rc geninfo_unexecuted_blocks=1 00:36:15.679 00:36:15.679 ' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.679 --rc genhtml_branch_coverage=1 00:36:15.679 --rc genhtml_function_coverage=1 00:36:15.679 --rc genhtml_legend=1 00:36:15.679 --rc geninfo_all_blocks=1 00:36:15.679 --rc geninfo_unexecuted_blocks=1 00:36:15.679 00:36:15.679 ' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.679 --rc genhtml_branch_coverage=1 00:36:15.679 --rc genhtml_function_coverage=1 00:36:15.679 --rc genhtml_legend=1 00:36:15.679 --rc geninfo_all_blocks=1 00:36:15.679 --rc geninfo_unexecuted_blocks=1 00:36:15.679 00:36:15.679 ' 00:36:15.679 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.679 --rc genhtml_branch_coverage=1 00:36:15.679 --rc genhtml_function_coverage=1 00:36:15.679 --rc genhtml_legend=1 00:36:15.679 --rc geninfo_all_blocks=1 00:36:15.679 --rc geninfo_unexecuted_blocks=1 00:36:15.679 00:36:15.680 ' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.680 10:08:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.680 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:15.681 10:08:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:15.681 Cannot find device "nvmf_init_br" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:15.681 Cannot find device "nvmf_init_br2" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:15.681 Cannot find device "nvmf_tgt_br" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:15.681 Cannot find device "nvmf_tgt_br2" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:15.681 Cannot find device "nvmf_init_br" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:15.681 Cannot find device "nvmf_init_br2" 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:36:15.681 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:15.940 Cannot find device "nvmf_tgt_br" 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:15.940 Cannot find device "nvmf_tgt_br2" 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:15.940 Cannot find device 
"nvmf_br" 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:15.940 Cannot find device "nvmf_init_if" 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:15.940 Cannot find device "nvmf_init_if2" 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:15.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:15.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:15.940 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:15.941 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:16.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:16.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:36:16.200 00:36:16.200 --- 10.0.0.3 ping statistics --- 00:36:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.200 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:16.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:36:16.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:36:16.200 00:36:16.200 --- 10.0.0.4 ping statistics --- 00:36:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.200 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:16.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:16.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:36:16.200 00:36:16.200 --- 10.0.0.1 ping statistics --- 00:36:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.200 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:16.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:16.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:36:16.200 00:36:16.200 --- 10.0.0.2 ping statistics --- 00:36:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.200 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=115945 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 115945 00:36:16.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
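Condensed from the nvmf/common.sh veth setup traced just above, the network that these pings exercised looks roughly like the following. This is a sketch with one initiator/target pair; the trace builds two of each (nvmf_init_if2, nvmf_tgt_if2) and also inserts iptables ACCEPT rules for TCP port 4420. Command names and addresses are taken verbatim from the trace; only the ordering is condensed here.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # One bridge joins the host-side peers, so host and netns can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3   # host -> target path, matching the statistics above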
00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 115945 ']' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:16.200 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:16.200 [2024-12-05 10:08:03.773355] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:16.200 [2024-12-05 10:08:03.775840] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:36:16.201 [2024-12-05 10:08:03.775998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.460 [2024-12-05 10:08:03.956175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.720 [2024-12-05 10:08:04.118143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.720 [2024-12-05 10:08:04.118546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.720 [2024-12-05 10:08:04.118602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.720 [2024-12-05 10:08:04.118626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.720 [2024-12-05 10:08:04.118642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.720 [2024-12-05 10:08:04.120289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.980 [2024-12-05 10:08:04.467681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:16.980 [2024-12-05 10:08:04.468225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
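The `nvmfappstart -m 0x2` step traced here boils down to launching the target inside the netns and waiting for its RPC socket to answer. A minimal sketch follows; the target command line and pid 115945 come straight from the trace, but the probe via `rpc_get_methods` and the retry count are assumptions, not the harness's exact waitforlisten logic.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!   # 115945 in the trace above
    for ((i = 0; i < 100; i++)); do
        # The app serves RPCs on /var/tmp/spdk.sock once initialization is done;
        # any cheap RPC succeeding is enough to know it is listening.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break
        sleep 0.5
    done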
00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.239 [2024-12-05 10:08:04.845828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:17.239 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.498 [2024-12-05 10:08:04.870206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:17.498 10:08:04 
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:17.498 malloc0
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:17.498 {
00:36:17.498   "params": {
00:36:17.498     "name": "Nvme$subsystem",
00:36:17.498     "trtype": "$TEST_TRANSPORT",
00:36:17.498     "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:17.498     "adrfam": "ipv4",
00:36:17.498     "trsvcid": "$NVMF_PORT",
00:36:17.498     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:17.498     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:17.498     "hdgst": ${hdgst:-false},
00:36:17.498     "ddgst": ${ddgst:-false}
00:36:17.498   },
00:36:17.498   "method": "bdev_nvme_attach_controller"
00:36:17.498 }
00:36:17.498 EOF
00:36:17.498 )")
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:36:17.498 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:17.498   "params": {
00:36:17.498     "name": "Nvme1",
00:36:17.498     "trtype": "tcp",
00:36:17.498     "traddr": "10.0.0.3",
00:36:17.498     "adrfam": "ipv4",
00:36:17.498     "trsvcid": "4420",
00:36:17.498     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:17.498     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:17.498     "hdgst": false,
00:36:17.498     "ddgst": false
00:36:17.498   },
00:36:17.498   "method": "bdev_nvme_attach_controller"
00:36:17.498 }'
00:36:17.498 [2024-12-05 10:08:05.045627] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
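gen_nvmf_target_json prints only the bdev_nvme_attach_controller entry resolved above; bdevperf reads it on /dev/fd/62 as part of a standard SPDK JSON configuration. Assuming the usual subsystems/bdev envelope around that entry, the complete document handed to --json would look roughly like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }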
00:36:17.498 [2024-12-05 10:08:05.045766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116002 ]
00:36:17.756 [2024-12-05 10:08:05.223282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:17.756 [2024-12-05 10:08:05.366639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:18.320 Running I/O for 10 seconds...
00:36:20.233  5352.00 IOPS,    41.81 MiB/s
[2024-12-05T10:08:08.799Z]  5425.50 IOPS,    42.39 MiB/s
[2024-12-05T10:08:10.174Z]  5467.67 IOPS,    42.72 MiB/s
[2024-12-05T10:08:11.111Z]  5500.25 IOPS,    42.97 MiB/s
[2024-12-05T10:08:12.048Z]  5506.20 IOPS,    43.02 MiB/s
[2024-12-05T10:08:12.986Z]  5507.50 IOPS,    43.03 MiB/s
[2024-12-05T10:08:13.923Z]  5514.29 IOPS,    43.08 MiB/s
[2024-12-05T10:08:14.859Z]  5525.38 IOPS,    43.17 MiB/s
[2024-12-05T10:08:15.793Z]  5528.67 IOPS,    43.19 MiB/s
[2024-12-05T10:08:15.793Z]  5530.90 IOPS,    43.21 MiB/s
00:36:28.164                                                 Latency(us)
00:36:28.164 [2024-12-05T10:08:15.793Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average      min      max
00:36:28.164 [2024-12-05T10:08:15.793Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:36:28.164 [2024-12-05T10:08:15.793Z] Verification LBA range: start 0x0 length 0x1000
00:36:28.164 [2024-12-05T10:08:15.793Z] Nvme1n1            : 10.02     5533.12   43.23   0.00     0.00   23071.08  2859.75  35508.60
00:36:28.164 [2024-12-05T10:08:15.793Z] ===================================================================================================================
00:36:28.164 [2024-12-05T10:08:15.793Z] Total              :          5533.12   43.23   0.00     0.00   23071.08  2859.75  35508.60
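A quick sanity check on the throughput column: with 8192-byte I/Os, MiB/s is just IOPS scaled by the I/O size, which matches the Total row above:

  # 5533.12 IOPS * 8192 bytes per I/O, converted to MiB/s:
  awk 'BEGIN { printf "%.2f MiB/s\n", 5533.12 * 8192 / (1024 * 1024) }'
  # prints 43.23 MiB/s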
00:36:29.098 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=116120
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:29.099 {
00:36:29.099   "params": {
00:36:29.099     "name": "Nvme$subsystem",
00:36:29.099     "trtype": "$TEST_TRANSPORT",
00:36:29.099     "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:29.099     "adrfam": "ipv4",
00:36:29.099     "trsvcid": "$NVMF_PORT",
00:36:29.099     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:29.099     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:29.099     "hdgst": ${hdgst:-false},
00:36:29.099     "ddgst": ${ddgst:-false}
00:36:29.099   },
00:36:29.099   "method": "bdev_nvme_attach_controller"
00:36:29.099 }
00:36:29.099 EOF
00:36:29.099 )")
00:36:29.099 [2024-12-05 10:08:16.641569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:29.099 [2024-12-05 10:08:16.641636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:36:29.099 2024/12/05 10:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:36:29.099 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:29.099   "params": {
00:36:29.099     "name": "Nvme1",
00:36:29.099     "trtype": "tcp",
00:36:29.099     "traddr": "10.0.0.3",
00:36:29.099     "adrfam": "ipv4",
00:36:29.099     "trsvcid": "4420",
00:36:29.099     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:29.099     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:29.099     "hdgst": false,
00:36:29.099     "ddgst": false
00:36:29.099   },
00:36:29.099   "method": "bdev_nvme_attach_controller"
00:36:29.099 }'
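The Code=-32602 (Invalid parameters) responses here are the error path this phase exercises: malloc0 is already attached as NSID 1, so every further nvmf_subsystem_add_ns for the same NSID is rejected while I/O continues. A minimal standalone reproduction, reusing the rpc.py arguments from the log (rpc.py path assumed), might look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed
  # NSID 1 already holds malloc0, so a repeat add is expected to fail
  # with JSON-RPC error -32602 (Invalid parameters):
  if ! "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
      echo "add_ns rejected: NSID 1 already in use (expected)"
  fi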
00:36:29.099 [2024-12-05 10:08:16.653438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:29.099 [2024-12-05 10:08:16.653475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:29.099 2024/12/05 10:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the three-line error block above repeats several dozen times, differing only in timestamps, at roughly 12 ms intervals from 10:08:16.665 through 10:08:17.409; interleaved with the repetitions, the second bdevperf instance finishes starting up:]
00:36:29.358 [2024-12-05 10:08:16.759248] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:36:29.358 [2024-12-05 10:08:16.759373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116120 ]
00:36:29.358 [2024-12-05 10:08:16.930172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:29.617 [2024-12-05 10:08:17.041289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:29.877 Running I/O for 5 seconds...
[the same error block continues to repeat at 10-20 ms intervals from 10:08:17.430 through 10:08:18.008, where this section of the log is truncated mid-entry]
parameters 00:36:30.397 [2024-12-05 10:08:18.018628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.397 [2024-12-05 10:08:18.018668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.397 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.033729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.033775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.044905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.044955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.058823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.058861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.077599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.077646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.088386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.088424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.101154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.101204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.112192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.112230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.126250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.126298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.146587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.146625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.162340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.162378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.182581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.182621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.200298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.200337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.211482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.211520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.228633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.228671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.251130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.251167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.657 [2024-12-05 10:08:18.266014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.657 [2024-12-05 10:08:18.266052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.657 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.287074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.287112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.303128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.303165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.321415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.321452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.331819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.331868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.348273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.348322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.358996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.359037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.375511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.375548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.391075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.391113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.408622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.408660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.418720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.418757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 10795.00 IOPS, 84.34 MiB/s [2024-12-05T10:08:18.546Z] 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.435584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.435621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.450880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.450919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.467827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.467865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.483168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.483205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.500191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.500228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.510700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.510752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.917 [2024-12-05 10:08:18.527030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.917 [2024-12-05 10:08:18.527068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.917 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.545898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.545947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.565549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.565588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.576353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.576402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.592580] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.592630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.603533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.603570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.618852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.618890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.635351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.635389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.651667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.651705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.666804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.666843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.685437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 
10:08:18.685475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.696379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.696417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.708916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.708965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.720220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.720257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.733543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.733581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.753805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.753843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.772217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.772255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.782350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.782386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.177 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.177 [2024-12-05 10:08:18.798771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.177 [2024-12-05 10:08:18.798808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.178 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.813989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.814027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.834387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.834437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.851364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.851402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.867410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.867448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.883676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.883714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.899067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.899105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.917489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.917527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.928571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.928609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.939684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.939722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.953647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.953684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.974013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.974051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:18.993002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:18.993039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:19.014390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:19.014439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.437 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.437 [2024-12-05 10:08:19.033675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.437 [2024-12-05 10:08:19.033713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.438 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.438 [2024-12-05 10:08:19.043926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.438 [2024-12-05 10:08:19.043976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.438 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.438 [2024-12-05 10:08:19.058429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.438 [2024-12-05 10:08:19.058475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.438 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.075155] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.075192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.091731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.091782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.106985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.107019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.125466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.125504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.135733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.135770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.151542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.151580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.168263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 
10:08:19.168302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.178303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.178341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.194734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.194772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.697 [2024-12-05 10:08:19.212459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.697 [2024-12-05 10:08:19.212497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.697 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.698 [2024-12-05 10:08:19.234943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.698 [2024-12-05 10:08:19.235006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.698 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.698 [2024-12-05 10:08:19.251733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.698 [2024-12-05 10:08:19.251771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.698 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.698 [2024-12-05 10:08:19.266985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.698 [2024-12-05 10:08:19.267022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.698 2024/12/05 10:08:19 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:31.698 [2024-12-05 10:08:19.283348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:31.698 [2024-12-05 10:08:19.283387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:31.698 2024/12/05 10:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
...
00:36:31.958 10857.00 IOPS, 84.82 MiB/s [2024-12-05T10:08:19.587Z]
...
00:36:33.001 10786.33 IOPS, 84.27 MiB/s [2024-12-05T10:08:20.630Z]
...
00:36:33.781 [2024-12-05 10:08:21.275042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:33.781 [2024-12-05 10:08:21.275079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:33.781 2024/12/05 10:08:21 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.291798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.291836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.302487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.302525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.318912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.318960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.338196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.338234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.355474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.355512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.371036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.371074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.389638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.389677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.781 [2024-12-05 10:08:21.400882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.781 [2024-12-05 10:08:21.400920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.781 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.041 [2024-12-05 10:08:21.415086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.041 [2024-12-05 10:08:21.415123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.041 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.041 10827.00 IOPS, 84.59 MiB/s [2024-12-05T10:08:21.670Z] [2024-12-05 10:08:21.431275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.041 [2024-12-05 10:08:21.431313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.041 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.041 [2024-12-05 10:08:21.447649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.041 [2024-12-05 10:08:21.447687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.041 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.041 [2024-12-05 10:08:21.464315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.041 [2024-12-05 10:08:21.464354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.041 2024/12/05 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) 
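The %!s(bool=false) tokens in the params dumps above are not log corruption: they are Go's fmt package flagging a verb/type mismatch (a bool printed with the string verb %s), which is consistent with the Go-style date prefixes on the JSON-RPC client's lines. A minimal sketch reproducing the token (illustrative only, not the test client's actual code):

    package main

    import "fmt"

    func main() {
        // %s applied to a bool yields the exact token seen in the params dump.
        fmt.Printf("hide_metadata:%s\n", false) // prints: hide_metadata:%!s(bool=false)

        // %v (or %t) is the verb that would have printed the value cleanly.
        fmt.Printf("hide_metadata:%v\n", false) // prints: hide_metadata:false
    }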
[... the sequence continues repeating between 10:08:21.431 and 10:08:22.425, each nvmf_subsystem_add_ns attempt rejected the same way while the I/O job keeps running ...]
00:36:34.822 10872.60 IOPS, 84.94 MiB/s [2024-12-05T10:08:22.451Z]
00:36:34.822 Latency(us)
00:36:34.822 [2024-12-05T10:08:22.451Z] Device Information : runtime(s)     IOPS      MiB/s    Fail/s   TO/s     Average       min        max
00:36:34.822 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:34.822 Nvme1n1            :       5.01  10876.34      84.97      0.00     0.00    11753.01    2978.91   25499.46
00:36:34.822 [2024-12-05T10:08:22.451Z] ===================================================================================================================
00:36:34.822 [2024-12-05T10:08:22.451Z] Total              :            10876.34      84.97      0.00     0.00    11753.01    2978.91   25499.46
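The MiB/s column of the summary follows directly from the IOPS column and the job line's 8192-byte IO size; a quick arithmetic check:

    package main

    import "fmt"

    func main() {
        const (
            iops   = 10876.34 // Total row above
            ioSize = 8192.0   // bytes per IO, from the job line
        )
        // 10876.34 IOPS * 8 KiB per IO / 2^20 = ~84.97 MiB/s, matching the table.
        mibps := iops * ioSize / (1024 * 1024)
        fmt.Printf("%.2f MiB/s\n", mibps)
    }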
[... after the summary the same three-line error sequence resumes and repeats between 10:08:22.433 and 10:08:22.769, again differing only in timestamps ...]
00:36:35.343 [2024-12-05 10:08:22.781450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:35.343 [2024-12-05 10:08:22.781480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.793390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.793419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.805415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.805444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.817389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.817416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.829411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.829442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.841415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.841445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.853391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.853418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.865415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.865444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.877425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.877454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.343 [2024-12-05 10:08:22.889389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.343 [2024-12-05 10:08:22.889418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.343 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.901417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.901447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.913389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.913416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.925431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.925460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.937413] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.937442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.949388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.949416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.344 [2024-12-05 10:08:22.961417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.344 [2024-12-05 10:08:22.961452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.344 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:22.973412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:22.973441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:22.985396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:22.985424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:22.997418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:22.997452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.009388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 
10:08:23.009416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.021446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.021475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.033422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.033451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.045393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.045422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.057408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.057439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.069439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.069468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.081389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.081417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.093412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.093441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.105392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.105420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.117431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.117460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.604 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.604 [2024-12-05 10:08:23.129422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.604 [2024-12-05 10:08:23.129455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.605 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.605 [2024-12-05 10:08:23.141389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.605 [2024-12-05 10:08:23.141417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.605 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.605 [2024-12-05 10:08:23.153418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.605 [2024-12-05 10:08:23.153451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.605 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.605 [2024-12-05 10:08:23.165414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.605 [2024-12-05 10:08:23.165443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.605 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.605 [2024-12-05 10:08:23.177387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.605 [2024-12-05 10:08:23.177416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.605 2024/12/05 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.605 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (116120) - No such process 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 116120 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:35.605 delay0 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.605 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:36:35.864 [2024-12-05 10:08:23.448713] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:42.428 Initializing NVMe Controllers 00:36:42.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:42.428 Initialization complete. Launching workers. 00:36:42.428 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:36:42.428 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 394, failed to submit 33 00:36:42.428 success 297, unsuccessful 97, failed 0 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:42.428 rmmod nvme_tcp 00:36:42.428 rmmod nvme_fabrics 00:36:42.428 rmmod nvme_keyring 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 115945 ']' 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 115945 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 115945 ']' 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 115945 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115945 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:42.428 killing process with pid 115945 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115945' 00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 115945 
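The burst of identical failures above is evidently negative-path testing: NSID 1 is already backed by malloc0, so each repeated nvmf_subsystem_add_ns is rejected with -32602, after which zcopy.sh swaps in a delay bdev so the abort example has slow in-flight I/O to cancel. A minimal sketch of that same RPC sequence issued directly through scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; this assumes a running SPDK target that already exposes cnode1 and malloc0, exactly as in the log):

# duplicate add: expected to fail, NSID 1 is taken by malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# free NSID 1, then stack a delay bdev on malloc0 (latencies in microseconds, so ~1 s)
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# re-export the slow bdev as NSID 1; queued I/O now lives long enough to be aborted
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'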
00:36:42.428 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 115945 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:43.363 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:43.364 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:43.622 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:43.622 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.622 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.622 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.622 10:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:36:43.622 00:36:43.622 real 0m28.045s 00:36:43.622 user 0m42.556s 00:36:43.622 sys 0m8.931s 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.622 ************************************ 00:36:43.622 END TEST nvmf_zcopy 00:36:43.622 ************************************ 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:43.622 ************************************ 00:36:43.622 START TEST nvmf_nmic 00:36:43.622 ************************************ 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:43.622 * Looking for test storage... 00:36:43.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:36:43.622 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:43.880 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- scripts/common.sh@345 -- # : 1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:43.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.881 --rc genhtml_branch_coverage=1 00:36:43.881 --rc genhtml_function_coverage=1 00:36:43.881 --rc genhtml_legend=1 00:36:43.881 --rc geninfo_all_blocks=1 00:36:43.881 --rc geninfo_unexecuted_blocks=1 00:36:43.881 00:36:43.881 ' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:43.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.881 --rc genhtml_branch_coverage=1 00:36:43.881 --rc genhtml_function_coverage=1 00:36:43.881 --rc genhtml_legend=1 00:36:43.881 --rc geninfo_all_blocks=1 00:36:43.881 --rc geninfo_unexecuted_blocks=1 00:36:43.881 00:36:43.881 ' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:43.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.881 --rc genhtml_branch_coverage=1 00:36:43.881 --rc genhtml_function_coverage=1 00:36:43.881 --rc genhtml_legend=1 00:36:43.881 --rc geninfo_all_blocks=1 00:36:43.881 --rc geninfo_unexecuted_blocks=1 00:36:43.881 00:36:43.881 ' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:43.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.881 --rc genhtml_branch_coverage=1 00:36:43.881 
--rc genhtml_function_coverage=1 00:36:43.881 --rc genhtml_legend=1 00:36:43.881 --rc geninfo_all_blocks=1 00:36:43.881 --rc geninfo_unexecuted_blocks=1 00:36:43.881 00:36:43.881 ' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…the same golangci/protoc/go toolchain entries repeated by nested sourcing…]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[…duplicated toolchain entries as above…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…duplicated toolchain entries as above…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…duplicated toolchain entries as above…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:43.881 10:08:31
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:43.881 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:43.882 Cannot find device "nvmf_init_br" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:43.882 Cannot find device "nvmf_init_br2" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:43.882 Cannot find device "nvmf_tgt_br" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:43.882 Cannot find device "nvmf_tgt_br2" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:43.882 Cannot find device "nvmf_init_br" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:43.882 Cannot find device "nvmf_init_br2" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:43.882 Cannot find device "nvmf_tgt_br" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:43.882 Cannot find device "nvmf_tgt_br2" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
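These variables parameterize nvmf_veth_init, whose trace follows: the target gets its own network namespace, veth pairs connect the host and the namespace, and a bridge joins the host-side ends so the 10.0.0.0/24 addresses can reach each other. Condensed to its essentials, a sketch assuming iproute2; the interface names and addresses are the ones defined above, and the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is built the same way:

# namespace for the SPDK target
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry the IPs, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bridge ties the host-side peers together (each link also gets "ip link set ... up")
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT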
00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:43.882 Cannot find device "nvmf_br" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:43.882 Cannot find device "nvmf_init_if" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:43.882 Cannot find device "nvmf_init_if2" 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:43.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:43.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:43.882 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:44.140 10:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:44.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:44.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:36:44.140 00:36:44.140 --- 10.0.0.3 ping statistics --- 00:36:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.140 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:44.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:44.140 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:36:44.140 00:36:44.140 --- 10.0.0.4 ping statistics --- 00:36:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.140 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:44.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:44.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:36:44.140 00:36:44.140 --- 10.0.0.1 ping statistics --- 00:36:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.140 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:44.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:36:44.140 00:36:44.140 --- 10.0.0.2 ping statistics --- 00:36:44.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.140 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:44.140 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=116501 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 116501 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 116501 ']' 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:44.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:44.398 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:44.398 [2024-12-05 10:08:31.911509] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:44.398 [2024-12-05 10:08:31.914635] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:36:44.398 [2024-12-05 10:08:31.914763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.655 [2024-12-05 10:08:32.110879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.655 [2024-12-05 10:08:32.266587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.655 [2024-12-05 10:08:32.266661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.655 [2024-12-05 10:08:32.266676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.655 [2024-12-05 10:08:32.266688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.655 [2024-12-05 10:08:32.266699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.655 [2024-12-05 10:08:32.268754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.655 [2024-12-05 10:08:32.268895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.655 [2024-12-05 10:08:32.269026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.655 [2024-12-05 10:08:32.269038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:45.221 [2024-12-05 10:08:32.555895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:45.221 [2024-12-05 10:08:32.556869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:45.221 [2024-12-05 10:08:32.557741] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
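The target itself is launched inside the namespace with interrupt mode enabled (-i 0 -e 0xFFFF --interrupt-mode -m 0xF), and waitforlisten blocks until the RPC socket answers before any rpc_cmd runs. A hedged sketch of that launch-and-wait pattern; polling via rpc_get_methods is an assumption standing in for waitforlisten's internals:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Launch nvmf_tgt in the target namespace, backgrounded.
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready (or give up).
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done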
00:36:45.221 [2024-12-05 10:08:32.558337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:45.221 [2024-12-05 10:08:32.558657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.480 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.480 [2024-12-05 10:08:32.982492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.480 Malloc0 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:45.480 
10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.480 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.481 [2024-12-05 10:08:33.098499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:45.481 test case1: single bdev can't be used in multiple subsystems 00:36:45.481 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.481 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:45.481 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:45.481 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.481 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.739 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.739 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.740 [2024-12-05 10:08:33.122108] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:45.740 [2024-12-05 10:08:33.122176] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:45.740 [2024-12-05 10:08:33.122197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:45.740 2024/12/05 10:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:45.740 request: 00:36:45.740 { 00:36:45.740 "method": "nvmf_subsystem_add_ns", 00:36:45.740 "params": { 00:36:45.740 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:45.740 "namespace": { 00:36:45.740 "bdev_name": "Malloc0", 00:36:45.740 "no_auto_visible": false, 00:36:45.740 "hide_metadata": false 00:36:45.740 } 00:36:45.740 } 00:36:45.740 } 00:36:45.740 Got JSON-RPC error response 00:36:45.740 GoRPCClient: error on JSON-RPC call 00:36:45.740 
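That Invalid parameters error is the expected outcome: Malloc0 is already claimed with type exclusive_write by cnode1, so attaching it to cnode2 must fail, and the test records nmic_status=1 to assert that it did. The RPC sequence can be replayed by hand; a sketch assuming a running target on the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # The second claim on the same bdev should be rejected (-32602):
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected success: bdev claimed twice' >&2
        exit 1
    fi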
10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:45.740 Adding namespace failed - expected result. 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:45.740 test case2: host connect to nvmf target in multiple paths 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:45.740 [2024-12-05 10:08:33.134287] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:45.740 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:48.281 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:48.281 10:08:35 
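test case2 connects the initiator to the same subsystem over both listeners (ports 4420 and 4421 on 10.0.0.3), after which waitforserial polls lsblk until a block device with the test serial enumerates. A sketch of that connect-and-wait pattern, with the host identity taken from the log:

    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516
          --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516)
    nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
    # Both connects land on the same subsystem, so exactly one namespace
    # with the test serial should appear; poll until it does.
    for _ in $(seq 1 15); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done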
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:48.281 [global] 00:36:48.281 thread=1 00:36:48.281 invalidate=1 00:36:48.281 rw=write 00:36:48.281 time_based=1 00:36:48.281 runtime=1 00:36:48.281 ioengine=libaio 00:36:48.281 direct=1 00:36:48.281 bs=4096 00:36:48.281 iodepth=1 00:36:48.281 norandommap=0 00:36:48.281 numjobs=1 00:36:48.281 00:36:48.281 verify_dump=1 00:36:48.281 verify_backlog=512 00:36:48.281 verify_state_save=0 00:36:48.281 do_verify=1 00:36:48.281 verify=crc32c-intel 00:36:48.281 [job0] 00:36:48.281 filename=/dev/nvme0n1 00:36:48.281 Could not set queue depth (nvme0n1) 00:36:48.281 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.281 fio-3.35 00:36:48.281 Starting 1 thread 00:36:49.218 00:36:49.218 job0: (groupid=0, jobs=1): err= 0: pid=116605: Thu Dec 5 10:08:36 2024 00:36:49.218 read: IOPS=2264, BW=9059KiB/s (9276kB/s)(9068KiB/1001msec) 00:36:49.218 slat (nsec): min=13188, max=66220, avg=16168.12, stdev=4795.67 00:36:49.218 clat (usec): min=177, max=2238, avg=221.57, stdev=47.85 00:36:49.218 lat (usec): min=191, max=2252, avg=237.73, stdev=48.17 00:36:49.218 clat percentiles (usec): 00:36:49.218 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:36:49.218 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:36:49.218 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:36:49.218 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 343], 99.95th=[ 437], 00:36:49.218 | 99.99th=[ 2245] 00:36:49.218 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:36:49.218 slat (usec): min=18, max=109, avg=23.68, stdev= 7.22 00:36:49.218 clat (usec): min=122, max=302, avg=153.57, stdev=20.71 00:36:49.218 lat (usec): min=142, max=412, avg=177.25, stdev=23.69 00:36:49.218 clat percentiles (usec): 00:36:49.218 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:36:49.218 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 153], 00:36:49.218 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 194], 00:36:49.218 | 99.00th=[ 219], 99.50th=[ 237], 99.90th=[ 277], 99.95th=[ 281], 00:36:49.218 | 99.99th=[ 302] 00:36:49.218 bw ( KiB/s): min=11256, max=11256, per=100.00%, avg=11256.00, stdev= 0.00, samples=1 00:36:49.218 iops : min= 2814, max= 2814, avg=2814.00, stdev= 0.00, samples=1 00:36:49.218 lat (usec) : 250=94.99%, 500=4.99% 00:36:49.218 lat (msec) : 4=0.02% 00:36:49.218 cpu : usr=1.50%, sys=6.90%, ctx=4827, majf=0, minf=5 00:36:49.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.218 issued rwts: total=2267,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:49.218 00:36:49.218 Run status group 0 (all jobs): 00:36:49.218 READ: bw=9059KiB/s (9276kB/s), 9059KiB/s-9059KiB/s (9276kB/s-9276kB/s), io=9068KiB (9286kB), run=1001-1001msec 00:36:49.218 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:36:49.218 00:36:49.218 Disk stats (read/write): 00:36:49.218 nvme0n1: ios=2098/2287, merge=0/0, ticks=492/360, in_queue=852, util=91.78% 00:36:49.218 10:08:36 
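The parameters fio-wrapper printed above correspond to the following standalone job file (the filename is simply whatever the connected namespace enumerated as, /dev/nvme0n1 in this run); saved to a file, it can be run directly with fio:

    ; 4 KiB sequential-write job with crc32c verification, 1 s time-based
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1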
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:49.218 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:49.477 rmmod nvme_tcp 00:36:49.477 rmmod nvme_fabrics 00:36:49.477 rmmod nvme_keyring 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 116501 ']' 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 116501 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 116501 ']' 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 116501 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116501 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:49.477 10:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:49.477 killing process with pid 116501 00:36:49.477 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116501' 00:36:49.478 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 116501 00:36:49.478 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 116501 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:50.414 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:50.673 10:08:38 
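Cleanup mirrors setup: killprocess stops the target, iptr sweeps only the firewall rules the test added, and the links, bridge and namespace are removed. Condensed into a standalone sketch, the teardown amounts to roughly:

    # Restore the firewall minus everything tagged SPDK_NVMF (see iptr above).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Unwind the topology; deleting the namespace also destroys the veth
    # pairs whose ends live inside it, which is what remove_spdk_ns relies on.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk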
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:36:50.673 00:36:50.673 real 0m7.161s 00:36:50.673 user 0m16.711s 00:36:50.673 sys 0m2.154s 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.673 ************************************ 00:36:50.673 END TEST nvmf_nmic 00:36:50.673 ************************************ 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.673 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:50.932 ************************************ 00:36:50.932 START TEST nvmf_fio_target 00:36:50.932 ************************************ 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:50.932 * Looking for test storage... 
00:36:50.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.932 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.933 --rc genhtml_branch_coverage=1 00:36:50.933 --rc genhtml_function_coverage=1 00:36:50.933 --rc genhtml_legend=1 00:36:50.933 --rc geninfo_all_blocks=1 00:36:50.933 --rc geninfo_unexecuted_blocks=1 00:36:50.933 00:36:50.933 ' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.933 --rc genhtml_branch_coverage=1 00:36:50.933 --rc genhtml_function_coverage=1 00:36:50.933 --rc genhtml_legend=1 00:36:50.933 --rc geninfo_all_blocks=1 00:36:50.933 --rc geninfo_unexecuted_blocks=1 00:36:50.933 00:36:50.933 ' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.933 --rc genhtml_branch_coverage=1 00:36:50.933 --rc genhtml_function_coverage=1 00:36:50.933 --rc genhtml_legend=1 00:36:50.933 --rc geninfo_all_blocks=1 00:36:50.933 --rc geninfo_unexecuted_blocks=1 00:36:50.933 00:36:50.933 ' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.933 --rc genhtml_branch_coverage=1 00:36:50.933 --rc genhtml_function_coverage=1 00:36:50.933 --rc genhtml_legend=1 00:36:50.933 --rc geninfo_all_blocks=1 00:36:50.933 --rc geninfo_unexecuted_blocks=1 00:36:50.933 
00:36:50.933 ' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:50.933 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:50.934 10:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:50.934 Cannot find device "nvmf_init_br" 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:36:50.934 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:50.934 Cannot find device "nvmf_init_br2" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:51.207 Cannot find device "nvmf_tgt_br" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:51.207 Cannot find device "nvmf_tgt_br2" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:51.207 Cannot find device "nvmf_init_br" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:51.207 Cannot find device "nvmf_init_br2" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:51.207 Cannot find device "nvmf_tgt_br" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:51.207 Cannot find device "nvmf_tgt_br2" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:51.207 Cannot find device "nvmf_br" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:51.207 Cannot find device "nvmf_init_if" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:51.207 Cannot find device "nvmf_init_if2" 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:51.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:51.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:51.207 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:51.208 10:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:51.208 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:51.467 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:51.468 10:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:51.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:51.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:36:51.468 00:36:51.468 --- 10.0.0.3 ping statistics --- 00:36:51.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.468 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:51.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:51.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:36:51.468 00:36:51.468 --- 10.0.0.4 ping statistics --- 00:36:51.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.468 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:51.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:51.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:36:51.468 00:36:51.468 --- 10.0.0.1 ping statistics --- 00:36:51.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.468 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:51.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:51.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:36:51.468 00:36:51.468 --- 10.0.0.2 ping statistics --- 00:36:51.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.468 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=116849 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 116849 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 116849 ']' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.468 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:51.726 [2024-12-05 10:08:39.112705] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
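Annotation: after the four cross-namespace pings verify connectivity, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF, per the trace), records its PID in nvmfpid, and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A hedged sketch of such a readiness poll — rpc_get_methods is a standard SPDK RPC, but whether waitforlisten uses exactly this probe is an assumption:

  # assumed launch context: nvmf_tgt was started in the background
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" || exit 1   # stop waiting if the target already died
      # probe the RPC socket; -t 1 bounds each attempt to one second
      if scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done

The (( i == 0 )) / return 0 pair traced a little further down appears to be this loop's bookkeeping completing on the first successful check.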
00:36:51.726 [2024-12-05 10:08:39.115909] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:36:51.726 [2024-12-05 10:08:39.116068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.726 [2024-12-05 10:08:39.313116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.984 [2024-12-05 10:08:39.473814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.984 [2024-12-05 10:08:39.473904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.984 [2024-12-05 10:08:39.473947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.984 [2024-12-05 10:08:39.473972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.984 [2024-12-05 10:08:39.473990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:51.984 [2024-12-05 10:08:39.476678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.984 [2024-12-05 10:08:39.476840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.984 [2024-12-05 10:08:39.476998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.984 [2024-12-05 10:08:39.477461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:52.242 [2024-12-05 10:08:39.790491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:52.242 [2024-12-05 10:08:39.791286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:52.242 [2024-12-05 10:08:39.792616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:52.242 [2024-12-05 10:08:39.792808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:52.242 [2024-12-05 10:08:39.793161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
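Annotation: with the target up in interrupt mode (reactors and spdk_threads switched to intr mode in the notices above), target/fio.sh assembles the test topology over JSON-RPC, as traced below: a TCP transport, seven 64 MB malloc bdevs with 512-byte blocks, a RAID0 over Malloc2/Malloc3, a concat array over Malloc4-Malloc6, and subsystem cnode1 exposing four namespaces (Malloc0, Malloc1, raid0, concat0) on 10.0.0.3:4420 — which is why waitforserial later expects exactly 4 nvme devices. The same sequence condensed into a plain script (every call appears verbatim in the trace; $rpc is shorthand for the traced rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # seven calls in the trace; names Malloc0..Malloc6 are auto-assigned
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # then connect from the host side (flags copied from the trace)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 \
      --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420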
00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.501 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:53.069 [2024-12-05 10:08:40.406825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.069 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:53.328 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:53.328 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:53.586 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:53.586 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:54.153 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:54.153 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:54.412 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:54.412 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:54.671 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:54.930 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:54.930 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:55.189 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:55.189 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:55.448 10:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:55.448 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:55.707 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:55.966 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:55.966 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:56.225 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:56.225 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:56.484 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:56.743 [2024-12-05 10:08:44.218627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:56.743 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:57.003 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:57.261 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:36:57.261 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:57.261 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:57.261 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:57.261 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:57.262 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:57.262 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:59.164 10:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:36:59.164 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:59.423 [global] 00:36:59.423 thread=1 00:36:59.423 invalidate=1 00:36:59.423 rw=write 00:36:59.423 time_based=1 00:36:59.423 runtime=1 00:36:59.423 ioengine=libaio 00:36:59.423 direct=1 00:36:59.423 bs=4096 00:36:59.423 iodepth=1 00:36:59.423 norandommap=0 00:36:59.423 numjobs=1 00:36:59.423 00:36:59.423 verify_dump=1 00:36:59.423 verify_backlog=512 00:36:59.423 verify_state_save=0 00:36:59.423 do_verify=1 00:36:59.423 verify=crc32c-intel 00:36:59.423 [job0] 00:36:59.423 filename=/dev/nvme0n1 00:36:59.423 [job1] 00:36:59.423 filename=/dev/nvme0n2 00:36:59.423 [job2] 00:36:59.423 filename=/dev/nvme0n3 00:36:59.423 [job3] 00:36:59.423 filename=/dev/nvme0n4 00:36:59.423 Could not set queue depth (nvme0n1) 00:36:59.423 Could not set queue depth (nvme0n2) 00:36:59.423 Could not set queue depth (nvme0n3) 00:36:59.423 Could not set queue depth (nvme0n4) 00:36:59.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:59.424 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:59.424 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:59.424 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:59.424 fio-3.35 00:36:59.424 Starting 4 threads 00:37:00.803 00:37:00.803 job0: (groupid=0, jobs=1): err= 0: pid=117141: Thu Dec 5 10:08:48 2024 00:37:00.803 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:37:00.803 slat (nsec): min=8685, max=48700, avg=15973.56, stdev=4487.29 00:37:00.803 clat (usec): min=248, max=691, avg=455.63, stdev=52.67 00:37:00.803 lat (usec): min=261, max=705, avg=471.60, stdev=52.99 00:37:00.803 clat percentiles (usec): 00:37:00.803 | 1.00th=[ 269], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 429], 00:37:00.803 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:37:00.803 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 537], 00:37:00.803 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 693], 00:37:00.803 | 99.99th=[ 693] 00:37:00.803 write: IOPS=1342, BW=5371KiB/s (5500kB/s)(5376KiB/1001msec); 0 zone resets 00:37:00.803 slat (nsec): min=19262, max=93933, avg=28450.97, stdev=8826.75 00:37:00.803 clat (usec): min=186, max=599, avg=352.97, stdev=48.97 00:37:00.803 lat (usec): min=221, max=627, avg=381.42, stdev=47.20 00:37:00.803 clat percentiles (usec): 00:37:00.803 | 1.00th=[ 212], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 318], 00:37:00.803 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:37:00.803 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 429], 00:37:00.803 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 603], 99.95th=[ 603], 00:37:00.803 | 99.99th=[ 603] 00:37:00.803 bw ( KiB/s): min= 5328, max= 5328, per=19.72%, avg=5328.00, stdev= 0.00, samples=1 00:37:00.804 iops : min= 1332, max= 1332, avg=1332.00, stdev= 0.00, samples=1 00:37:00.804 lat (usec) : 250=1.44%, 500=92.53%, 750=6.04% 00:37:00.804 cpu : usr=1.00%, 
sys=4.20%, ctx=2370, majf=0, minf=11 00:37:00.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 issued rwts: total=1024,1344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.804 job1: (groupid=0, jobs=1): err= 0: pid=117142: Thu Dec 5 10:08:48 2024 00:37:00.804 read: IOPS=1883, BW=7532KiB/s (7713kB/s)(7540KiB/1001msec) 00:37:00.804 slat (nsec): min=13371, max=53255, avg=16984.36, stdev=4198.62 00:37:00.804 clat (usec): min=202, max=455, avg=263.66, stdev=29.03 00:37:00.804 lat (usec): min=218, max=471, avg=280.64, stdev=29.82 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:37:00.804 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:37:00.804 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 318], 00:37:00.804 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 457], 00:37:00.804 | 99.99th=[ 457] 00:37:00.804 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:37:00.804 slat (nsec): min=19024, max=85679, avg=25698.99, stdev=6923.45 00:37:00.804 clat (usec): min=140, max=588, avg=201.04, stdev=28.76 00:37:00.804 lat (usec): min=159, max=607, avg=226.74, stdev=30.88 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:37:00.804 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:37:00.804 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 253], 00:37:00.804 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 347], 99.95th=[ 375], 00:37:00.804 | 99.99th=[ 586] 00:37:00.804 bw ( KiB/s): min= 8192, max= 8192, per=30.32%, avg=8192.00, stdev= 0.00, samples=1 00:37:00.804 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:00.804 lat (usec) : 250=66.13%, 500=33.84%, 750=0.03% 00:37:00.804 cpu : usr=1.40%, sys=6.10%, ctx=3934, majf=0, minf=9 00:37:00.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 issued rwts: total=1885,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.804 job2: (groupid=0, jobs=1): err= 0: pid=117143: Thu Dec 5 10:08:48 2024 00:37:00.804 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:37:00.804 slat (nsec): min=12254, max=61859, avg=16912.51, stdev=5120.13 00:37:00.804 clat (usec): min=264, max=854, avg=454.81, stdev=48.59 00:37:00.804 lat (usec): min=277, max=916, avg=471.72, stdev=48.86 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 412], 20.00th=[ 429], 00:37:00.804 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 461], 00:37:00.804 | 70.00th=[ 474], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 523], 00:37:00.804 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 857], 00:37:00.804 | 99.99th=[ 857] 00:37:00.804 write: IOPS=1341, BW=5367KiB/s (5495kB/s)(5372KiB/1001msec); 0 zone resets 00:37:00.804 slat (usec): min=13, max=249, avg=24.11, stdev=10.09 00:37:00.804 clat (usec): min=225, max=620, avg=357.90, stdev=42.01 
00:37:00.804 lat (usec): min=272, max=639, avg=382.00, stdev=41.86 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:37:00.804 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:37:00.804 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 429], 00:37:00.804 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 619], 00:37:00.804 | 99.99th=[ 619] 00:37:00.804 bw ( KiB/s): min= 5336, max= 5336, per=19.75%, avg=5336.00, stdev= 0.00, samples=1 00:37:00.804 iops : min= 1334, max= 1334, avg=1334.00, stdev= 0.00, samples=1 00:37:00.804 lat (usec) : 250=0.04%, 500=95.18%, 750=4.73%, 1000=0.04% 00:37:00.804 cpu : usr=1.20%, sys=3.30%, ctx=2369, majf=0, minf=11 00:37:00.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 issued rwts: total=1024,1343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.804 job3: (groupid=0, jobs=1): err= 0: pid=117144: Thu Dec 5 10:08:48 2024 00:37:00.804 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:37:00.804 slat (nsec): min=11883, max=76170, avg=17314.96, stdev=6703.10 00:37:00.804 clat (usec): min=216, max=2446, avg=286.40, stdev=69.92 00:37:00.804 lat (usec): min=231, max=2461, avg=303.72, stdev=71.17 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:37:00.804 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:37:00.804 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 359], 00:37:00.804 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 1020], 99.95th=[ 2442], 00:37:00.804 | 99.99th=[ 2442] 00:37:00.804 write: IOPS=2023, BW=8096KiB/s (8290kB/s)(8104KiB/1001msec); 0 zone resets 00:37:00.804 slat (nsec): min=17429, max=96939, avg=27820.71, stdev=9895.00 00:37:00.804 clat (usec): min=159, max=1221, avg=232.04, stdev=54.13 00:37:00.804 lat (usec): min=178, max=1278, avg=259.86, stdev=59.62 00:37:00.804 clat percentiles (usec): 00:37:00.804 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 194], 00:37:00.804 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 233], 00:37:00.804 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:37:00.804 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 553], 00:37:00.804 | 99.99th=[ 1221] 00:37:00.804 bw ( KiB/s): min= 8192, max= 8192, per=30.32%, avg=8192.00, stdev= 0.00, samples=1 00:37:00.804 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:00.804 lat (usec) : 250=47.47%, 500=52.08%, 750=0.36% 00:37:00.804 lat (msec) : 2=0.06%, 4=0.03% 00:37:00.804 cpu : usr=1.00%, sys=6.70%, ctx=3562, majf=0, minf=7 00:37:00.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.804 issued rwts: total=1536,2026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.804 00:37:00.804 Run status group 0 (all jobs): 00:37:00.804 READ: bw=21.3MiB/s (22.4MB/s), 4092KiB/s-7532KiB/s (4190kB/s-7713kB/s), io=21.4MiB (22.4MB), run=1001-1001msec 00:37:00.804 WRITE: 
bw=26.4MiB/s (27.7MB/s), 5367KiB/s-8184KiB/s (5495kB/s-8380kB/s), io=26.4MiB (27.7MB), run=1001-1001msec 00:37:00.804 00:37:00.804 Disk stats (read/write): 00:37:00.804 nvme0n1: ios=1035/1024, merge=0/0, ticks=469/372, in_queue=841, util=88.08% 00:37:00.804 nvme0n2: ios=1577/1843, merge=0/0, ticks=477/390, in_queue=867, util=89.45% 00:37:00.804 nvme0n3: ios=989/1024, merge=0/0, ticks=459/360, in_queue=819, util=89.03% 00:37:00.804 nvme0n4: ios=1475/1536, merge=0/0, ticks=433/385, in_queue=818, util=89.78% 00:37:00.804 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:00.804 [global] 00:37:00.804 thread=1 00:37:00.804 invalidate=1 00:37:00.804 rw=randwrite 00:37:00.804 time_based=1 00:37:00.804 runtime=1 00:37:00.804 ioengine=libaio 00:37:00.804 direct=1 00:37:00.804 bs=4096 00:37:00.804 iodepth=1 00:37:00.804 norandommap=0 00:37:00.804 numjobs=1 00:37:00.804 00:37:00.804 verify_dump=1 00:37:00.804 verify_backlog=512 00:37:00.804 verify_state_save=0 00:37:00.804 do_verify=1 00:37:00.804 verify=crc32c-intel 00:37:00.804 [job0] 00:37:00.804 filename=/dev/nvme0n1 00:37:00.804 [job1] 00:37:00.804 filename=/dev/nvme0n2 00:37:00.804 [job2] 00:37:00.804 filename=/dev/nvme0n3 00:37:00.804 [job3] 00:37:00.804 filename=/dev/nvme0n4 00:37:00.804 Could not set queue depth (nvme0n1) 00:37:00.804 Could not set queue depth (nvme0n2) 00:37:00.804 Could not set queue depth (nvme0n3) 00:37:00.804 Could not set queue depth (nvme0n4) 00:37:00.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:00.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:00.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:00.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:00.804 fio-3.35 00:37:00.804 Starting 4 threads 00:37:02.182 00:37:02.182 job0: (groupid=0, jobs=1): err= 0: pid=117197: Thu Dec 5 10:08:49 2024 00:37:02.182 read: IOPS=1535, BW=6142KiB/s (6289kB/s)(6148KiB/1001msec) 00:37:02.182 slat (nsec): min=12681, max=54117, avg=16516.02, stdev=4195.73 00:37:02.182 clat (usec): min=206, max=839, avg=292.88, stdev=37.26 00:37:02.182 lat (usec): min=221, max=855, avg=309.40, stdev=37.41 00:37:02.182 clat percentiles (usec): 00:37:02.182 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 265], 00:37:02.182 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:37:02.182 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:37:02.182 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 840], 00:37:02.182 | 99.99th=[ 840] 00:37:02.182 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:37:02.182 slat (usec): min=18, max=120, avg=25.21, stdev= 7.11 00:37:02.182 clat (usec): min=159, max=2333, avg=227.91, stdev=59.92 00:37:02.182 lat (usec): min=179, max=2361, avg=253.12, stdev=60.69 00:37:02.182 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:37:02.183 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:37:02.183 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:37:02.183 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 635], 99.95th=[ 1037], 00:37:02.183 | 99.99th=[ 2343] 
00:37:02.183 bw ( KiB/s): min= 8192, max= 8192, per=25.19%, avg=8192.00, stdev= 0.00, samples=1 00:37:02.183 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:02.183 lat (usec) : 250=50.99%, 500=48.87%, 750=0.06%, 1000=0.03% 00:37:02.183 lat (msec) : 2=0.03%, 4=0.03% 00:37:02.183 cpu : usr=1.30%, sys=5.60%, ctx=3586, majf=0, minf=9 00:37:02.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:02.183 job1: (groupid=0, jobs=1): err= 0: pid=117198: Thu Dec 5 10:08:49 2024 00:37:02.183 read: IOPS=1991, BW=7964KiB/s (8155kB/s)(7972KiB/1001msec) 00:37:02.183 slat (nsec): min=13316, max=69335, avg=16319.17, stdev=4159.60 00:37:02.183 clat (usec): min=179, max=2107, avg=256.04, stdev=68.01 00:37:02.183 lat (usec): min=194, max=2121, avg=272.36, stdev=68.49 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:37:02.183 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:37:02.183 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:37:02.183 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 2057], 99.95th=[ 2114], 00:37:02.183 | 99.99th=[ 2114] 00:37:02.183 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:37:02.183 slat (usec): min=18, max=105, avg=23.62, stdev= 5.82 00:37:02.183 clat (usec): min=126, max=744, avg=196.39, stdev=33.36 00:37:02.183 lat (usec): min=147, max=764, avg=220.01, stdev=34.66 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 169], 00:37:02.183 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 202], 00:37:02.183 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 249], 00:37:02.183 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 420], 00:37:02.183 | 99.99th=[ 742] 00:37:02.183 bw ( KiB/s): min= 8192, max= 8192, per=25.19%, avg=8192.00, stdev= 0.00, samples=1 00:37:02.183 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:02.183 lat (usec) : 250=72.46%, 500=27.42%, 750=0.05%, 1000=0.02% 00:37:02.183 lat (msec) : 4=0.05% 00:37:02.183 cpu : usr=1.70%, sys=5.30%, ctx=4042, majf=0, minf=21 00:37:02.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 issued rwts: total=1993,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:02.183 job2: (groupid=0, jobs=1): err= 0: pid=117199: Thu Dec 5 10:08:49 2024 00:37:02.183 read: IOPS=1712, BW=6849KiB/s (7014kB/s)(6856KiB/1001msec) 00:37:02.183 slat (nsec): min=11540, max=51702, avg=17688.81, stdev=4873.58 00:37:02.183 clat (usec): min=203, max=681, avg=272.03, stdev=34.63 00:37:02.183 lat (usec): min=217, max=695, avg=289.72, stdev=35.30 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:37:02.183 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:37:02.183 | 70.00th=[ 289], 80.00th=[ 302], 
90.00th=[ 318], 95.00th=[ 330], 00:37:02.183 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 429], 99.95th=[ 685], 00:37:02.183 | 99.99th=[ 685] 00:37:02.183 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:37:02.183 slat (usec): min=16, max=107, avg=26.13, stdev= 6.57 00:37:02.183 clat (usec): min=149, max=680, avg=216.24, stdev=32.69 00:37:02.183 lat (usec): min=166, max=706, avg=242.37, stdev=33.76 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:37:02.183 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:37:02.183 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 265], 00:37:02.183 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 494], 99.95th=[ 545], 00:37:02.183 | 99.99th=[ 685] 00:37:02.183 bw ( KiB/s): min= 8192, max= 8192, per=25.19%, avg=8192.00, stdev= 0.00, samples=1 00:37:02.183 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:02.183 lat (usec) : 250=60.07%, 500=39.85%, 750=0.08% 00:37:02.183 cpu : usr=1.60%, sys=6.30%, ctx=3763, majf=0, minf=7 00:37:02.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 issued rwts: total=1714,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:02.183 job3: (groupid=0, jobs=1): err= 0: pid=117200: Thu Dec 5 10:08:49 2024 00:37:02.183 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:37:02.183 slat (nsec): min=15602, max=58868, avg=19744.70, stdev=3586.95 00:37:02.183 clat (usec): min=207, max=396, avg=293.05, stdev=32.70 00:37:02.183 lat (usec): min=227, max=428, avg=312.79, stdev=33.01 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 265], 00:37:02.183 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:37:02.183 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:37:02.183 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 396], 99.95th=[ 396], 00:37:02.183 | 99.99th=[ 396] 00:37:02.183 write: IOPS=1992, BW=7968KiB/s (8159kB/s)(7976KiB/1001msec); 0 zone resets 00:37:02.183 slat (usec): min=22, max=101, avg=28.86, stdev= 5.54 00:37:02.183 clat (usec): min=154, max=547, avg=227.84, stdev=31.75 00:37:02.183 lat (usec): min=184, max=575, avg=256.69, stdev=32.32 00:37:02.183 clat percentiles (usec): 00:37:02.183 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 204], 00:37:02.183 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:37:02.183 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:37:02.183 | 99.00th=[ 318], 99.50th=[ 367], 99.90th=[ 474], 99.95th=[ 545], 00:37:02.183 | 99.99th=[ 545] 00:37:02.183 bw ( KiB/s): min= 8192, max= 8192, per=25.19%, avg=8192.00, stdev= 0.00, samples=1 00:37:02.183 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:02.183 lat (usec) : 250=49.01%, 500=50.96%, 750=0.03% 00:37:02.183 cpu : usr=1.90%, sys=6.40%, ctx=3530, majf=0, minf=7 00:37:02.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.183 issued rwts: total=1536,1994,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:37:02.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:02.183 00:37:02.183 Run status group 0 (all jobs): 00:37:02.183 READ: bw=26.5MiB/s (27.7MB/s), 6138KiB/s-7964KiB/s (6285kB/s-8155kB/s), io=26.5MiB (27.8MB), run=1001-1001msec 00:37:02.183 WRITE: bw=31.8MiB/s (33.3MB/s), 7968KiB/s-8184KiB/s (8159kB/s-8380kB/s), io=31.8MiB (33.3MB), run=1001-1001msec 00:37:02.183 00:37:02.183 Disk stats (read/write): 00:37:02.183 nvme0n1: ios=1550/1536, merge=0/0, ticks=474/366, in_queue=840, util=87.88% 00:37:02.183 nvme0n2: ios=1585/1967, merge=0/0, ticks=463/397, in_queue=860, util=90.10% 00:37:02.183 nvme0n3: ios=1563/1674, merge=0/0, ticks=469/371, in_queue=840, util=89.86% 00:37:02.183 nvme0n4: ios=1464/1536, merge=0/0, ticks=432/361, in_queue=793, util=89.69% 00:37:02.183 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:02.183 [global] 00:37:02.183 thread=1 00:37:02.183 invalidate=1 00:37:02.183 rw=write 00:37:02.183 time_based=1 00:37:02.183 runtime=1 00:37:02.183 ioengine=libaio 00:37:02.183 direct=1 00:37:02.183 bs=4096 00:37:02.183 iodepth=128 00:37:02.183 norandommap=0 00:37:02.183 numjobs=1 00:37:02.183 00:37:02.183 verify_dump=1 00:37:02.183 verify_backlog=512 00:37:02.183 verify_state_save=0 00:37:02.183 do_verify=1 00:37:02.183 verify=crc32c-intel 00:37:02.183 [job0] 00:37:02.183 filename=/dev/nvme0n1 00:37:02.183 [job1] 00:37:02.183 filename=/dev/nvme0n2 00:37:02.183 [job2] 00:37:02.183 filename=/dev/nvme0n3 00:37:02.183 [job3] 00:37:02.183 filename=/dev/nvme0n4 00:37:02.183 Could not set queue depth (nvme0n1) 00:37:02.183 Could not set queue depth (nvme0n2) 00:37:02.183 Could not set queue depth (nvme0n3) 00:37:02.183 Could not set queue depth (nvme0n4) 00:37:02.183 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:02.183 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:02.183 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:02.183 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:02.183 fio-3.35 00:37:02.183 Starting 4 threads 00:37:03.560 00:37:03.560 job0: (groupid=0, jobs=1): err= 0: pid=117255: Thu Dec 5 10:08:50 2024 00:37:03.560 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:37:03.560 slat (usec): min=3, max=11179, avg=300.52, stdev=1350.77 00:37:03.560 clat (usec): min=24141, max=56359, avg=37117.98, stdev=4831.45 00:37:03.560 lat (usec): min=24156, max=56375, avg=37418.51, stdev=4761.22 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[27395], 5.00th=[30016], 10.00th=[31065], 20.00th=[33162], 00:37:03.560 | 30.00th=[34341], 40.00th=[35914], 50.00th=[36963], 60.00th=[38011], 00:37:03.560 | 70.00th=[39060], 80.00th=[40633], 90.00th=[42730], 95.00th=[44827], 00:37:03.560 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:37:03.560 | 99.99th=[56361] 00:37:03.560 write: IOPS=1992, BW=7968KiB/s (8160kB/s)(8040KiB/1009msec); 0 zone resets 00:37:03.560 slat (usec): min=8, max=9934, avg=260.65, stdev=1239.22 00:37:03.560 clat (usec): min=8359, max=53290, avg=34911.12, stdev=7033.66 00:37:03.560 lat (usec): min=8389, max=53313, avg=35171.77, stdev=6977.73 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 
1.00th=[11731], 5.00th=[19268], 10.00th=[26608], 20.00th=[33162], 00:37:03.560 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[36963], 00:37:03.560 | 70.00th=[37487], 80.00th=[38011], 90.00th=[40109], 95.00th=[42730], 00:37:03.560 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:37:03.560 | 99.99th=[53216] 00:37:03.560 bw ( KiB/s): min= 6872, max= 8208, per=15.97%, avg=7540.00, stdev=944.69, samples=2 00:37:03.560 iops : min= 1718, max= 2052, avg=1885.00, stdev=236.17, samples=2 00:37:03.560 lat (msec) : 10=0.31%, 20=3.72%, 50=93.77%, 100=2.20% 00:37:03.560 cpu : usr=1.59%, sys=5.75%, ctx=353, majf=0, minf=9 00:37:03.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:37:03.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:03.560 issued rwts: total=1536,2010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:03.560 job1: (groupid=0, jobs=1): err= 0: pid=117256: Thu Dec 5 10:08:50 2024 00:37:03.560 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:37:03.560 slat (usec): min=6, max=6443, avg=112.99, stdev=571.63 00:37:03.560 clat (usec): min=9047, max=26322, avg=14776.56, stdev=2095.27 00:37:03.560 lat (usec): min=9074, max=26335, avg=14889.56, stdev=2107.27 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[10159], 5.00th=[11600], 10.00th=[11994], 20.00th=[12780], 00:37:03.560 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14746], 60.00th=[15664], 00:37:03.560 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17433], 95.00th=[17957], 00:37:03.560 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20579], 99.95th=[20841], 00:37:03.560 | 99.99th=[26346] 00:37:03.560 write: IOPS=4268, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1008msec); 0 zone resets 00:37:03.560 slat (usec): min=7, max=11352, avg=117.23, stdev=617.13 00:37:03.560 clat (usec): min=6848, max=35101, avg=15297.98, stdev=3018.42 00:37:03.560 lat (usec): min=7921, max=35132, avg=15415.21, stdev=3069.89 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[10159], 5.00th=[11994], 10.00th=[13304], 20.00th=[13960], 00:37:03.560 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:37:03.560 | 70.00th=[15533], 80.00th=[16319], 90.00th=[17433], 95.00th=[18744], 00:37:03.560 | 99.00th=[29754], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:37:03.560 | 99.99th=[34866] 00:37:03.560 bw ( KiB/s): min=16384, max=17024, per=35.38%, avg=16704.00, stdev=452.55, samples=2 00:37:03.560 iops : min= 4096, max= 4256, avg=4176.00, stdev=113.14, samples=2 00:37:03.560 lat (msec) : 10=0.79%, 20=96.94%, 50=2.27% 00:37:03.560 cpu : usr=3.77%, sys=11.72%, ctx=468, majf=0, minf=11 00:37:03.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:03.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:03.560 issued rwts: total=4096,4303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:03.560 job2: (groupid=0, jobs=1): err= 0: pid=117257: Thu Dec 5 10:08:50 2024 00:37:03.560 read: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec) 00:37:03.560 slat (usec): min=4, max=11408, avg=294.88, stdev=1311.02 00:37:03.560 clat (usec): min=27001, max=53596, avg=36918.61, stdev=3575.15 
00:37:03.560 lat (usec): min=28916, max=53611, avg=37213.48, stdev=3531.43 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[29230], 5.00th=[31065], 10.00th=[32375], 20.00th=[33817], 00:37:03.560 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[37487], 00:37:03.560 | 70.00th=[38536], 80.00th=[40109], 90.00th=[41157], 95.00th=[42730], 00:37:03.560 | 99.00th=[46924], 99.50th=[47973], 99.90th=[53740], 99.95th=[53740], 00:37:03.560 | 99.99th=[53740] 00:37:03.560 write: IOPS=1924, BW=7699KiB/s (7884kB/s)(7776KiB/1010msec); 0 zone resets 00:37:03.560 slat (usec): min=10, max=10936, avg=273.97, stdev=1282.25 00:37:03.560 clat (usec): min=9282, max=56018, avg=35951.41, stdev=4863.49 00:37:03.560 lat (usec): min=14922, max=56043, avg=36225.38, stdev=4736.59 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[17695], 5.00th=[27395], 10.00th=[30278], 20.00th=[34341], 00:37:03.560 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:37:03.560 | 70.00th=[36963], 80.00th=[38011], 90.00th=[38536], 95.00th=[44303], 00:37:03.560 | 99.00th=[49021], 99.50th=[51119], 99.90th=[55837], 99.95th=[55837], 00:37:03.560 | 99.99th=[55837] 00:37:03.560 bw ( KiB/s): min= 6336, max= 8208, per=15.40%, avg=7272.00, stdev=1323.70, samples=2 00:37:03.560 iops : min= 1584, max= 2052, avg=1818.00, stdev=330.93, samples=2 00:37:03.560 lat (msec) : 10=0.03%, 20=0.92%, 50=98.65%, 100=0.40% 00:37:03.560 cpu : usr=1.49%, sys=6.14%, ctx=357, majf=0, minf=16 00:37:03.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:37:03.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:03.560 issued rwts: total=1536,1944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:03.560 job3: (groupid=0, jobs=1): err= 0: pid=117258: Thu Dec 5 10:08:50 2024 00:37:03.560 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:37:03.560 slat (usec): min=5, max=4934, avg=136.04, stdev=663.87 00:37:03.560 clat (usec): min=12949, max=22572, avg=17767.72, stdev=1317.41 00:37:03.560 lat (usec): min=14134, max=23033, avg=17903.76, stdev=1170.73 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[13698], 5.00th=[15533], 10.00th=[16450], 20.00th=[16909], 00:37:03.560 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17957], 60.00th=[17957], 00:37:03.560 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[19792], 00:37:03.560 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20317], 99.95th=[22414], 00:37:03.560 | 99.99th=[22676] 00:37:03.560 write: IOPS=3645, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1005msec); 0 zone resets 00:37:03.560 slat (usec): min=13, max=5399, avg=131.83, stdev=577.97 00:37:03.560 clat (usec): min=3967, max=22803, avg=17157.28, stdev=2216.32 00:37:03.560 lat (usec): min=4909, max=22825, avg=17289.11, stdev=2201.73 00:37:03.560 clat percentiles (usec): 00:37:03.560 | 1.00th=[10290], 5.00th=[13960], 10.00th=[14484], 20.00th=[15139], 00:37:03.560 | 30.00th=[15795], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:37:03.560 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[19792], 00:37:03.560 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:37:03.560 | 99.99th=[22676] 00:37:03.560 bw ( KiB/s): min=13000, max=15703, per=30.40%, avg=14351.50, stdev=1911.31, samples=2 00:37:03.560 iops : min= 3250, max= 3925, avg=3587.50, stdev=477.30, 
samples=2 00:37:03.560 lat (msec) : 4=0.01%, 10=0.44%, 20=95.93%, 50=3.61% 00:37:03.560 cpu : usr=3.39%, sys=10.66%, ctx=352, majf=0, minf=7 00:37:03.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:03.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:03.560 issued rwts: total=3584,3664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:03.560 00:37:03.560 Run status group 0 (all jobs): 00:37:03.560 READ: bw=41.6MiB/s (43.6MB/s), 6083KiB/s-15.9MiB/s (6229kB/s-16.6MB/s), io=42.0MiB (44.0MB), run=1005-1010msec 00:37:03.560 WRITE: bw=46.1MiB/s (48.3MB/s), 7699KiB/s-16.7MiB/s (7884kB/s-17.5MB/s), io=46.6MiB (48.8MB), run=1005-1010msec 00:37:03.560 00:37:03.560 Disk stats (read/write): 00:37:03.560 nvme0n1: ios=1519/1536, merge=0/0, ticks=13741/11328, in_queue=25069, util=87.98% 00:37:03.560 nvme0n2: ios=3576/3584, merge=0/0, ticks=25081/24658, in_queue=49739, util=89.79% 00:37:03.560 nvme0n3: ios=1448/1536, merge=0/0, ticks=12737/13002, in_queue=25739, util=89.37% 00:37:03.560 nvme0n4: ios=3072/3119, merge=0/0, ticks=12695/12290, in_queue=24985, util=89.71% 00:37:03.560 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:03.560 [global] 00:37:03.560 thread=1 00:37:03.560 invalidate=1 00:37:03.560 rw=randwrite 00:37:03.560 time_based=1 00:37:03.560 runtime=1 00:37:03.560 ioengine=libaio 00:37:03.560 direct=1 00:37:03.560 bs=4096 00:37:03.560 iodepth=128 00:37:03.560 norandommap=0 00:37:03.560 numjobs=1 00:37:03.560 00:37:03.560 verify_dump=1 00:37:03.560 verify_backlog=512 00:37:03.560 verify_state_save=0 00:37:03.560 do_verify=1 00:37:03.560 verify=crc32c-intel 00:37:03.560 [job0] 00:37:03.560 filename=/dev/nvme0n1 00:37:03.560 [job1] 00:37:03.560 filename=/dev/nvme0n2 00:37:03.560 [job2] 00:37:03.560 filename=/dev/nvme0n3 00:37:03.561 [job3] 00:37:03.561 filename=/dev/nvme0n4 00:37:03.561 Could not set queue depth (nvme0n1) 00:37:03.561 Could not set queue depth (nvme0n2) 00:37:03.561 Could not set queue depth (nvme0n3) 00:37:03.561 Could not set queue depth (nvme0n4) 00:37:03.561 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:03.561 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:03.561 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:03.561 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:03.561 fio-3.35 00:37:03.561 Starting 4 threads 00:37:04.939 00:37:04.939 job0: (groupid=0, jobs=1): err= 0: pid=117320: Thu Dec 5 10:08:52 2024 00:37:04.939 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:37:04.939 slat (usec): min=5, max=5945, avg=121.26, stdev=645.18 00:37:04.939 clat (usec): min=5843, max=24867, avg=15542.39, stdev=1806.55 00:37:04.939 lat (usec): min=5856, max=24904, avg=15663.65, stdev=1895.75 00:37:04.939 clat percentiles (usec): 00:37:04.939 | 1.00th=[ 6587], 5.00th=[13435], 10.00th=[14091], 20.00th=[14615], 00:37:04.939 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:37:04.939 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17695], 
95.00th=[18744], 00:37:04.939 | 99.00th=[19792], 99.50th=[20055], 99.90th=[23200], 99.95th=[23462], 00:37:04.939 | 99.99th=[24773] 00:37:04.939 write: IOPS=4084, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:37:04.939 slat (usec): min=11, max=5287, avg=115.12, stdev=538.69 00:37:04.939 clat (usec): min=550, max=24154, avg=15356.92, stdev=1771.59 00:37:04.939 lat (usec): min=5837, max=24188, avg=15472.04, stdev=1732.44 00:37:04.939 clat percentiles (usec): 00:37:04.939 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12649], 20.00th=[14484], 00:37:04.939 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:37:04.939 | 70.00th=[16057], 80.00th=[16712], 90.00th=[17171], 95.00th=[17957], 00:37:04.939 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:37:04.939 | 99.99th=[24249] 00:37:04.939 bw ( KiB/s): min=16384, max=16384, per=35.07%, avg=16384.00, stdev= 0.00, samples=2 00:37:04.939 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:37:04.939 lat (usec) : 750=0.01% 00:37:04.939 lat (msec) : 10=0.51%, 20=98.91%, 50=0.56% 00:37:04.939 cpu : usr=3.99%, sys=11.58%, ctx=365, majf=0, minf=15 00:37:04.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:04.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:04.939 issued rwts: total=4096,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:04.939 job1: (groupid=0, jobs=1): err= 0: pid=117321: Thu Dec 5 10:08:52 2024 00:37:04.939 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:37:04.940 slat (usec): min=6, max=36755, avg=293.04, stdev=2059.87 00:37:04.940 clat (msec): min=19, max=127, avg=41.29, stdev=28.50 00:37:04.940 lat (msec): min=19, max=127, avg=41.59, stdev=28.71 00:37:04.940 clat percentiles (msec): 00:37:04.940 | 1.00th=[ 20], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 21], 00:37:04.940 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 28], 00:37:04.940 | 70.00th=[ 57], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 99], 00:37:04.940 | 99.00th=[ 112], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:37:04.940 | 99.99th=[ 128] 00:37:04.940 write: IOPS=1807, BW=7229KiB/s (7402kB/s)(7272KiB/1006msec); 0 zone resets 00:37:04.940 slat (usec): min=16, max=37881, avg=294.66, stdev=2061.80 00:37:04.940 clat (msec): min=4, max=106, avg=33.76, stdev=20.77 00:37:04.940 lat (msec): min=5, max=106, avg=34.05, stdev=20.99 00:37:04.940 clat percentiles (msec): 00:37:04.940 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 21], 00:37:04.940 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 26], 00:37:04.940 | 70.00th=[ 32], 80.00th=[ 56], 90.00th=[ 68], 95.00th=[ 83], 00:37:04.940 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 104], 99.95th=[ 107], 00:37:04.940 | 99.99th=[ 107] 00:37:04.940 bw ( KiB/s): min= 5328, max= 8208, per=14.49%, avg=6768.00, stdev=2036.47, samples=2 00:37:04.940 iops : min= 1332, max= 2052, avg=1692.00, stdev=509.12, samples=2 00:37:04.940 lat (msec) : 10=0.98%, 20=13.42%, 50=58.11%, 100=25.55%, 250=1.94% 00:37:04.940 cpu : usr=2.19%, sys=4.98%, ctx=118, majf=0, minf=15 00:37:04.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:37:04.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:04.940 
issued rwts: total=1536,1818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:04.940 job2: (groupid=0, jobs=1): err= 0: pid=117322: Thu Dec 5 10:08:52 2024 00:37:04.940 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:37:04.940 slat (usec): min=10, max=6865, avg=149.43, stdev=815.89 00:37:04.940 clat (usec): min=14680, max=28636, avg=20031.21, stdev=1614.91 00:37:04.940 lat (usec): min=14702, max=28689, avg=20180.65, stdev=1746.49 00:37:04.940 clat percentiles (usec): 00:37:04.940 | 1.00th=[15533], 5.00th=[17433], 10.00th=[18220], 20.00th=[18744], 00:37:04.940 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:37:04.940 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22676], 00:37:04.940 | 99.00th=[25560], 99.50th=[26084], 99.90th=[27395], 99.95th=[27919], 00:37:04.940 | 99.99th=[28705] 00:37:04.940 write: IOPS=3263, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1007msec); 0 zone resets 00:37:04.940 slat (usec): min=13, max=6637, avg=157.70, stdev=789.13 00:37:04.940 clat (usec): min=2012, max=28410, avg=19910.97, stdev=2203.66 00:37:04.940 lat (usec): min=6949, max=28457, avg=20068.66, stdev=2272.17 00:37:04.940 clat percentiles (usec): 00:37:04.940 | 1.00th=[ 7767], 5.00th=[15533], 10.00th=[18482], 20.00th=[19268], 00:37:04.940 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:37:04.940 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:37:04.940 | 99.00th=[25035], 99.50th=[25822], 99.90th=[27132], 99.95th=[27919], 00:37:04.940 | 99.99th=[28443] 00:37:04.940 bw ( KiB/s): min=12336, max=12952, per=27.07%, avg=12644.00, stdev=435.58, samples=2 00:37:04.940 iops : min= 3084, max= 3238, avg=3161.00, stdev=108.89, samples=2 00:37:04.940 lat (msec) : 4=0.02%, 10=0.66%, 20=45.31%, 50=54.01% 00:37:04.940 cpu : usr=2.98%, sys=9.94%, ctx=242, majf=0, minf=11 00:37:04.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:37:04.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:04.940 issued rwts: total=3072,3286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:04.940 job3: (groupid=0, jobs=1): err= 0: pid=117323: Thu Dec 5 10:08:52 2024 00:37:04.940 read: IOPS=2329, BW=9317KiB/s (9541kB/s)(9336KiB/1002msec) 00:37:04.940 slat (usec): min=5, max=15920, avg=173.40, stdev=1170.90 00:37:04.940 clat (usec): min=1074, max=68642, avg=22254.05, stdev=7493.88 00:37:04.940 lat (usec): min=1101, max=68658, avg=22427.45, stdev=7581.73 00:37:04.940 clat percentiles (usec): 00:37:04.940 | 1.00th=[10683], 5.00th=[12649], 10.00th=[16319], 20.00th=[18220], 00:37:04.940 | 30.00th=[18482], 40.00th=[20055], 50.00th=[21365], 60.00th=[22414], 00:37:04.940 | 70.00th=[23200], 80.00th=[25035], 90.00th=[30016], 95.00th=[33162], 00:37:04.940 | 99.00th=[56361], 99.50th=[63701], 99.90th=[68682], 99.95th=[68682], 00:37:04.940 | 99.99th=[68682] 00:37:04.940 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:37:04.940 slat (usec): min=5, max=19018, avg=215.80, stdev=1309.90 00:37:04.940 clat (usec): min=636, max=70414, avg=29294.52, stdev=16778.96 00:37:04.940 lat (usec): min=891, max=70458, avg=29510.32, stdev=16903.39 00:37:04.940 clat percentiles (usec): 00:37:04.940 | 1.00th=[ 3425], 5.00th=[ 7177], 10.00th=[15139], 20.00th=[18220], 00:37:04.940 | 
30.00th=[20055], 40.00th=[21103], 50.00th=[21365], 60.00th=[22152], 00:37:04.940 | 70.00th=[31589], 80.00th=[53740], 90.00th=[55313], 95.00th=[57410], 00:37:04.940 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[68682], 00:37:04.940 | 99.99th=[70779] 00:37:04.940 bw ( KiB/s): min= 8625, max=11872, per=21.94%, avg=10248.50, stdev=2295.98, samples=2 00:37:04.940 iops : min= 2156, max= 2968, avg=2562.00, stdev=574.17, samples=2 00:37:04.940 lat (usec) : 750=0.02%, 1000=0.06% 00:37:04.940 lat (msec) : 2=0.14%, 4=0.55%, 10=2.10%, 20=32.35%, 50=51.10% 00:37:04.940 lat (msec) : 100=13.67% 00:37:04.940 cpu : usr=1.60%, sys=7.49%, ctx=197, majf=0, minf=11 00:37:04.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:37:04.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:04.940 issued rwts: total=2334,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:04.940 00:37:04.940 Run status group 0 (all jobs): 00:37:04.940 READ: bw=42.8MiB/s (44.9MB/s), 6107KiB/s-16.0MiB/s (6254kB/s-16.7MB/s), io=43.1MiB (45.2MB), run=1002-1007msec 00:37:04.940 WRITE: bw=45.6MiB/s (47.8MB/s), 7229KiB/s-16.0MiB/s (7402kB/s-16.7MB/s), io=45.9MiB (48.2MB), run=1002-1007msec 00:37:04.940 00:37:04.940 Disk stats (read/write): 00:37:04.940 nvme0n1: ios=3587/3584, merge=0/0, ticks=16592/16156, in_queue=32748, util=89.28% 00:37:04.940 nvme0n2: ios=1073/1536, merge=0/0, ticks=15962/18456, in_queue=34418, util=89.77% 00:37:04.940 nvme0n3: ios=2581/2892, merge=0/0, ticks=15820/17294, in_queue=33114, util=89.78% 00:37:04.940 nvme0n4: ios=2054/2159, merge=0/0, ticks=45068/57201, in_queue=102269, util=89.72% 00:37:04.940 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:04.940 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=117332 00:37:04.940 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:04.940 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:04.940 [global] 00:37:04.940 thread=1 00:37:04.940 invalidate=1 00:37:04.940 rw=read 00:37:04.940 time_based=1 00:37:04.940 runtime=10 00:37:04.940 ioengine=libaio 00:37:04.940 direct=1 00:37:04.940 bs=4096 00:37:04.940 iodepth=1 00:37:04.940 norandommap=1 00:37:04.940 numjobs=1 00:37:04.940 00:37:04.940 [job0] 00:37:04.940 filename=/dev/nvme0n1 00:37:04.940 [job1] 00:37:04.940 filename=/dev/nvme0n2 00:37:04.940 [job2] 00:37:04.940 filename=/dev/nvme0n3 00:37:04.940 [job3] 00:37:04.940 filename=/dev/nvme0n4 00:37:04.940 Could not set queue depth (nvme0n1) 00:37:04.940 Could not set queue depth (nvme0n2) 00:37:04.940 Could not set queue depth (nvme0n3) 00:37:04.940 Could not set queue depth (nvme0n4) 00:37:05.199 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:05.199 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:05.199 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:05.199 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:05.199 fio-3.35 
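Annotation: the traced fio.sh steps above and below reduce to a background workload plus RPC hot-removal. This is a condensed sketch built only from commands visible in this log (the explicit loop over the malloc bdevs and the exact deletion order are assumptions, not the verbatim target/fio.sh):

    spdk=/home/vagrant/spdk_repo/spdk
    # fio.sh@58-59: start a 10s read workload in the background (iodepth 1)
    $spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                  # fio.sh@61: let the jobs ramp up
    # Delete RAID volumes and backing malloc bdevs while I/O is in flight;
    # in-flight reads against a removed bdev fail with err=95 (EOPNOTSUPP).
    $spdk/scripts/rpc.py bdev_raid_delete concat0
    $spdk/scripts/rpc.py bdev_raid_delete raid0
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
    done
    wait "$fio_pid" || fio_status=4          # fio.sh@70: non-zero is expected here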
00:37:05.199 Starting 4 threads 00:37:08.486 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:08.486 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26685440, buflen=4096 00:37:08.486 fio: pid=117379, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:08.486 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:08.486 fio: pid=117378, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:08.486 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43720704, buflen=4096 00:37:08.486 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:08.486 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:08.486 fio: pid=117376, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:08.486 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=31645696, buflen=4096 00:37:08.743 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:08.743 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:09.002 fio: pid=117377, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:09.002 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48250880, buflen=4096 00:37:09.002 00:37:09.002 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=117376: Thu Dec 5 10:08:56 2024 00:37:09.002 read: IOPS=2311, BW=9244KiB/s (9466kB/s)(30.2MiB/3343msec) 00:37:09.002 slat (usec): min=6, max=16866, avg=20.21, stdev=248.52 00:37:09.002 clat (usec): min=182, max=3162, avg=410.88, stdev=82.07 00:37:09.002 lat (usec): min=193, max=17173, avg=431.09, stdev=261.43 00:37:09.002 clat percentiles (usec): 00:37:09.002 | 1.00th=[ 204], 5.00th=[ 269], 10.00th=[ 310], 20.00th=[ 375], 00:37:09.002 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:37:09.002 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 519], 00:37:09.002 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 955], 99.95th=[ 1549], 00:37:09.002 | 99.99th=[ 3163] 00:37:09.002 bw ( KiB/s): min= 8680, max= 9304, per=22.83%, avg=9040.00, stdev=222.85, samples=6 00:37:09.002 iops : min= 2170, max= 2326, avg=2260.00, stdev=55.71, samples=6 00:37:09.002 lat (usec) : 250=3.24%, 500=90.20%, 750=6.38%, 1000=0.08% 00:37:09.002 lat (msec) : 2=0.08%, 4=0.01% 00:37:09.002 cpu : usr=0.57%, sys=3.02%, ctx=7746, majf=0, minf=1 00:37:09.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 issued rwts: total=7727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.002 job1: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=117377: Thu Dec 5 10:08:56 2024 00:37:09.002 read: IOPS=3178, BW=12.4MiB/s (13.0MB/s)(46.0MiB/3707msec) 00:37:09.002 slat (usec): min=7, max=12736, avg=20.14, stdev=217.68 00:37:09.002 clat (usec): min=169, max=11464, avg=293.10, stdev=129.40 00:37:09.002 lat (usec): min=183, max=12971, avg=313.24, stdev=253.24 00:37:09.002 clat percentiles (usec): 00:37:09.002 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 204], 20.00th=[ 262], 00:37:09.002 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:37:09.002 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 371], 00:37:09.002 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 1020], 99.95th=[ 1467], 00:37:09.002 | 99.99th=[ 3195] 00:37:09.002 bw ( KiB/s): min=10600, max=13986, per=31.72%, avg=12558.00, stdev=1088.41, samples=7 00:37:09.002 iops : min= 2650, max= 3496, avg=3139.43, stdev=271.99, samples=7 00:37:09.002 lat (usec) : 250=15.53%, 500=83.63%, 750=0.65%, 1000=0.06% 00:37:09.002 lat (msec) : 2=0.07%, 4=0.03%, 20=0.01% 00:37:09.002 cpu : usr=0.86%, sys=3.97%, ctx=11800, majf=0, minf=1 00:37:09.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 issued rwts: total=11781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.002 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=117378: Thu Dec 5 10:08:56 2024 00:37:09.002 read: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(41.7MiB/3119msec) 00:37:09.002 slat (usec): min=11, max=10537, avg=16.31, stdev=126.21 00:37:09.002 clat (usec): min=209, max=28590, avg=274.50, stdev=276.95 00:37:09.002 lat (usec): min=223, max=28617, avg=290.81, stdev=304.52 00:37:09.002 clat percentiles (usec): 00:37:09.002 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:37:09.002 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:37:09.002 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:37:09.002 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 506], 99.95th=[ 857], 00:37:09.002 | 99.99th=[ 2343] 00:37:09.002 bw ( KiB/s): min=12464, max=14224, per=34.58%, avg=13692.00, stdev=671.69, samples=6 00:37:09.002 iops : min= 3116, max= 3556, avg=3423.00, stdev=167.92, samples=6 00:37:09.002 lat (usec) : 250=19.14%, 500=80.75%, 750=0.05%, 1000=0.01% 00:37:09.002 lat (msec) : 2=0.02%, 4=0.02%, 50=0.01% 00:37:09.002 cpu : usr=0.90%, sys=3.94%, ctx=10678, majf=0, minf=2 00:37:09.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 issued rwts: total=10675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.002 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=117379: Thu Dec 5 10:08:56 2024 00:37:09.002 read: IOPS=2250, BW=8999KiB/s (9215kB/s)(25.4MiB/2896msec) 00:37:09.002 slat (nsec): min=7209, max=68547, avg=14336.81, stdev=4419.83 00:37:09.002 clat (usec): min=224, max=1942, avg=428.52, stdev=60.18 00:37:09.002 lat (usec): min=236, max=1957, 
avg=442.86, stdev=60.89 00:37:09.002 clat percentiles (usec): 00:37:09.002 | 1.00th=[ 269], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 396], 00:37:09.002 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:37:09.002 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 519], 00:37:09.002 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 791], 99.95th=[ 1020], 00:37:09.002 | 99.99th=[ 1942] 00:37:09.002 bw ( KiB/s): min= 8688, max= 9304, per=22.84%, avg=9043.20, stdev=248.30, samples=5 00:37:09.002 iops : min= 2172, max= 2326, avg=2260.80, stdev=62.07, samples=5 00:37:09.002 lat (usec) : 250=0.38%, 500=92.13%, 750=7.37%, 1000=0.05% 00:37:09.002 lat (msec) : 2=0.06% 00:37:09.002 cpu : usr=0.55%, sys=2.83%, ctx=6525, majf=0, minf=2 00:37:09.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.002 issued rwts: total=6516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.002 00:37:09.002 Run status group 0 (all jobs): 00:37:09.002 READ: bw=38.7MiB/s (40.5MB/s), 8999KiB/s-13.4MiB/s (9215kB/s-14.0MB/s), io=143MiB (150MB), run=2896-3707msec 00:37:09.002 00:37:09.002 Disk stats (read/write): 00:37:09.002 nvme0n1: ios=7076/0, merge=0/0, ticks=2939/0, in_queue=2939, util=95.56% 00:37:09.002 nvme0n2: ios=11372/0, merge=0/0, ticks=3383/0, in_queue=3383, util=95.31% 00:37:09.002 nvme0n3: ios=10668/0, merge=0/0, ticks=2965/0, in_queue=2965, util=96.30% 00:37:09.002 nvme0n4: ios=6459/0, merge=0/0, ticks=2719/0, in_queue=2719, util=96.79% 00:37:09.002 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:09.002 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:09.568 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:09.568 10:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:09.825 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:09.825 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:10.392 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:10.392 10:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:10.650 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:10.650 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:10.909 10:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:10.909 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 117332 00:37:10.909 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:10.909 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:10.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:11.168 nvmf hotplug test: fio failed as expected 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:11.168 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:11.427 rmmod nvme_tcp 00:37:11.427 rmmod nvme_fabrics 00:37:11.427 rmmod nvme_keyring 00:37:11.427 
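Annotation: the waitforserial_disconnect trace above (autotest_common.sh@1223-1235) is a polling loop over lsblk serial numbers. A hedged reconstruction follows; the retry cap and sleep interval are assumptions, only the lsblk/grep probes are taken from the log:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Loop while any block device still reports this NVMe serial.
        while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && { echo "device $serial still present"; return 1; }
            sleep 1
        done
        return 0   # autotest_common.sh@1235: device is gone
    }

    waitforserial_disconnect SPDKISFASTANDAWESOME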
10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 116849 ']' 00:37:11.427 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 116849 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 116849 ']' 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 116849 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116849 00:37:11.428 killing process with pid 116849 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116849' 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 116849 00:37:11.428 10:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 116849 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set 
nvmf_tgt_br nomaster 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:12.365 10:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:37:12.625 ************************************ 00:37:12.625 END TEST nvmf_fio_target 00:37:12.625 ************************************ 00:37:12.625 00:37:12.625 real 0m21.831s 00:37:12.625 user 1m3.102s 00:37:12.625 sys 0m10.507s 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.625 ************************************ 00:37:12.625 START TEST nvmf_bdevio 00:37:12.625 ************************************ 00:37:12.625 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:12.884 * Looking for test storage... 
00:37:12.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.884 --rc genhtml_branch_coverage=1 00:37:12.884 --rc genhtml_function_coverage=1 00:37:12.884 --rc genhtml_legend=1 00:37:12.884 --rc geninfo_all_blocks=1 00:37:12.884 --rc geninfo_unexecuted_blocks=1 00:37:12.884 00:37:12.884 ' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.884 --rc genhtml_branch_coverage=1 00:37:12.884 --rc genhtml_function_coverage=1 00:37:12.884 --rc genhtml_legend=1 00:37:12.884 --rc geninfo_all_blocks=1 00:37:12.884 --rc geninfo_unexecuted_blocks=1 00:37:12.884 00:37:12.884 ' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.884 --rc genhtml_branch_coverage=1 00:37:12.884 --rc genhtml_function_coverage=1 00:37:12.884 --rc genhtml_legend=1 00:37:12.884 --rc geninfo_all_blocks=1 00:37:12.884 --rc geninfo_unexecuted_blocks=1 00:37:12.884 00:37:12.884 ' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.884 --rc genhtml_branch_coverage=1 00:37:12.884 --rc genhtml_function_coverage=1 00:37:12.884 --rc genhtml_legend=1 00:37:12.884 --rc geninfo_all_blocks=1 00:37:12.884 --rc geninfo_unexecuted_blocks=1 00:37:12.884 00:37:12.884 ' 00:37:12.884 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.884 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:12.884 Cannot find device "nvmf_init_br" 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:12.884 Cannot find device "nvmf_init_br2" 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:12.884 Cannot find device "nvmf_tgt_br" 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:12.884 Cannot find device "nvmf_tgt_br2" 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:12.884 Cannot find device "nvmf_init_br" 00:37:12.884 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:37:12.885 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:12.885 Cannot find device "nvmf_init_br2" 00:37:13.142 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:37:13.142 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:13.142 Cannot find device "nvmf_tgt_br" 00:37:13.142 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:37:13.142 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:13.142 Cannot find device "nvmf_tgt_br2" 00:37:13.142 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:37:13.142 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:13.142 Cannot find device "nvmf_br" 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:13.143 Cannot find device "nvmf_init_if" 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:13.143 Cannot find device "nvmf_init_if2" 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:13.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:13.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:13.143 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:13.143 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:13.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:37:13.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:37:13.402 00:37:13.402 --- 10.0.0.3 ping statistics --- 00:37:13.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.402 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:13.402 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:13.402 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:37:13.402 00:37:13.402 --- 10.0.0.4 ping statistics --- 00:37:13.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.402 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:13.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:13.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:37:13.402 00:37:13.402 --- 10.0.0.1 ping statistics --- 00:37:13.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.402 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:13.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:13.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:37:13.402 00:37:13.402 --- 10.0.0.2 ping statistics --- 00:37:13.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.402 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=117771 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 117771 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 117771 ']' 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:13.402 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:13.402 [2024-12-05 10:09:00.964169] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:13.402 [2024-12-05 10:09:00.967428] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:37:13.402 [2024-12-05 10:09:00.967569] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:13.659 [2024-12-05 10:09:01.168298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:13.917 [2024-12-05 10:09:01.335207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:13.917 [2024-12-05 10:09:01.335286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:13.917 [2024-12-05 10:09:01.335307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:13.917 [2024-12-05 10:09:01.335326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:13.917 [2024-12-05 10:09:01.335340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:13.917 [2024-12-05 10:09:01.338427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:13.917 [2024-12-05 10:09:01.338544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:13.917 [2024-12-05 10:09:01.338636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:13.917 [2024-12-05 10:09:01.338651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:14.175 [2024-12-05 10:09:01.685356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.176 [2024-12-05 10:09:01.686231] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.176 [2024-12-05 10:09:01.687725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
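Annotation: the target launch traced above boils down to the following sketch. The -m 0x78 mask (binary 0111'1000) selects cores 3-6, which matches the four "Reactor started" lines; the readiness poll is a crude stand-in for common.sh's waitforlisten, not its actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
            -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Poll the RPC socket until the target answers (stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        sleep 0.5
    done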
00:37:14.176 [2024-12-05 10:09:01.688018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.176 [2024-12-05 10:09:01.688671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 [2024-12-05 10:09:01.896647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.435 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 Malloc0 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:14.435 [2024-12-05 10:09:02.032853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:14.435 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:14.435 { 00:37:14.435 "params": { 00:37:14.435 "name": "Nvme$subsystem", 00:37:14.435 "trtype": "$TEST_TRANSPORT", 00:37:14.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.435 "adrfam": "ipv4", 00:37:14.435 "trsvcid": "$NVMF_PORT", 00:37:14.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.435 "hdgst": ${hdgst:-false}, 00:37:14.435 "ddgst": ${ddgst:-false} 00:37:14.435 }, 00:37:14.435 "method": "bdev_nvme_attach_controller" 00:37:14.435 } 00:37:14.435 EOF 00:37:14.436 )") 00:37:14.436 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:14.436 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:14.436 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:14.436 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:14.436 "params": { 00:37:14.436 "name": "Nvme1", 00:37:14.436 "trtype": "tcp", 00:37:14.436 "traddr": "10.0.0.3", 00:37:14.436 "adrfam": "ipv4", 00:37:14.436 "trsvcid": "4420", 00:37:14.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:14.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:14.436 "hdgst": false, 00:37:14.436 "ddgst": false 00:37:14.436 }, 00:37:14.436 "method": "bdev_nvme_attach_controller" 00:37:14.436 }' 00:37:14.695 [2024-12-05 10:09:02.156822] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:37:14.695 [2024-12-05 10:09:02.157007] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117825 ] 00:37:14.955 [2024-12-05 10:09:02.348505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:14.955 [2024-12-05 10:09:02.509269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.955 [2024-12-05 10:09:02.509455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.955 [2024-12-05 10:09:02.509465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:15.578 I/O targets: 00:37:15.578 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:15.578 00:37:15.578 00:37:15.578 CUnit - A unit testing framework for C - Version 2.1-3 00:37:15.578 http://cunit.sourceforge.net/ 00:37:15.578 00:37:15.578 00:37:15.578 Suite: bdevio tests on: Nvme1n1 00:37:15.578 Test: blockdev write read block ...passed 00:37:15.578 Test: blockdev write zeroes read block ...passed 00:37:15.578 Test: blockdev write zeroes read no split ...passed 00:37:15.578 Test: blockdev write zeroes read split ...passed 00:37:15.578 Test: blockdev write zeroes read split partial ...passed 00:37:15.578 Test: blockdev reset ...[2024-12-05 10:09:03.062871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:15.578 [2024-12-05 10:09:03.063319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:37:15.578 [2024-12-05 10:09:03.070061] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:15.578 passed 00:37:15.578 Test: blockdev write read 8 blocks ...passed 00:37:15.578 Test: blockdev write read size > 128k ...passed 00:37:15.578 Test: blockdev write read invalid size ...passed 00:37:15.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:15.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:15.578 Test: blockdev write read max offset ...passed 00:37:15.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:15.843 Test: blockdev writev readv 8 blocks ...passed 00:37:15.843 Test: blockdev writev readv 30 x 1block ...passed 00:37:15.843 Test: blockdev writev readv block ...passed 00:37:15.843 Test: blockdev writev readv size > 128k ...passed 00:37:15.843 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:15.843 Test: blockdev comparev and writev ...[2024-12-05 10:09:03.249704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.249767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.249795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.249810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.250325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.250539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.250582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.250599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.251344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.251392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.251415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.251428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.251919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.251980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.252006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:15.843 [2024-12-05 10:09:03.252034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:15.843 passed 00:37:15.843 Test: blockdev nvme passthru rw ...passed 00:37:15.843 Test: blockdev nvme passthru vendor specific ...[2024-12-05 10:09:03.335485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.843 [2024-12-05 10:09:03.335537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:15.843 passed 00:37:15.843 Test: blockdev nvme admin passthru ...[2024-12-05 10:09:03.335754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.843 [2024-12-05 10:09:03.335782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.335989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.843 [2024-12-05 10:09:03.336013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:15.843 [2024-12-05 10:09:03.336205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:15.843 [2024-12-05 10:09:03.336227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:15.843 passed 00:37:15.843 Test: blockdev copy ...passed 00:37:15.843 00:37:15.843 Run Summary: Type Total Ran Passed Failed Inactive 00:37:15.843 suites 1 1 n/a 0 0 00:37:15.843 tests 23 23 23 0 0 00:37:15.843 asserts 152 152 152 0 n/a 00:37:15.843 00:37:15.843 Elapsed time = 1.015 seconds 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:16.778 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.036 rmmod nvme_tcp 00:37:17.036 rmmod nvme_fabrics 00:37:17.036 rmmod nvme_keyring 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
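With the bdevio suite green (23/23 tests, 152 asserts), the EXIT trap fires nvmftestfini and teardown begins: the initiator-side kernel modules have just been unloaded, and the remaining steps, killing the target, stripping the test iptables rules, and deleting the veth/bridge topology, follow below. A rough by-hand equivalent as a sketch (names from the trace; $nvmfpid stands in for the recorded pid 117771):

  nvmfpid=117771
  modprobe -r nvme-tcp nvme-fabrics              # drop initiator-side modules
  kill "$nvmfpid"                                # stop the interrupt-mode nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the tagged test ACCEPT rules
  ip link delete nvmf_br type bridge             # tear down the test bridge
  ip netns delete nvmf_tgt_ns_spdk               # destroys the veth peers moved inside it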
00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 117771 ']' 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 117771 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 117771 ']' 00:37:17.036 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 117771 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117771 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:17.037 killing process with pid 117771 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117771' 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 117771 00:37:17.037 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 117771 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:18.027 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:18.285 10:09:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:37:18.285 00:37:18.285 real 0m5.669s 00:37:18.285 user 0m16.251s 00:37:18.285 sys 0m1.778s 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.285 ************************************ 00:37:18.285 END TEST nvmf_bdevio 00:37:18.285 ************************************ 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:18.285 00:37:18.285 real 3m57.120s 00:37:18.285 user 10m19.773s 00:37:18.285 sys 1m23.101s 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:18.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:18.285 ************************************ 00:37:18.285 END TEST nvmf_target_core_interrupt_mode 00:37:18.285 ************************************ 00:37:18.544 10:09:05 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:18.544 10:09:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:18.544 10:09:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:18.544 10:09:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.544 ************************************ 00:37:18.544 START TEST nvmf_interrupt 00:37:18.544 ************************************ 00:37:18.544 10:09:05 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:18.544 * Looking for test storage... 00:37:18.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:18.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.544 --rc genhtml_branch_coverage=1 00:37:18.544 --rc genhtml_function_coverage=1 00:37:18.544 --rc genhtml_legend=1 00:37:18.544 --rc geninfo_all_blocks=1 00:37:18.544 --rc geninfo_unexecuted_blocks=1 00:37:18.544 00:37:18.544 ' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:18.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.544 --rc genhtml_branch_coverage=1 00:37:18.544 --rc genhtml_function_coverage=1 00:37:18.544 --rc genhtml_legend=1 00:37:18.544 --rc geninfo_all_blocks=1 00:37:18.544 --rc geninfo_unexecuted_blocks=1 00:37:18.544 00:37:18.544 ' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:18.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.544 --rc genhtml_branch_coverage=1 00:37:18.544 --rc genhtml_function_coverage=1 00:37:18.544 --rc genhtml_legend=1 00:37:18.544 --rc geninfo_all_blocks=1 00:37:18.544 --rc geninfo_unexecuted_blocks=1 00:37:18.544 00:37:18.544 ' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:18.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.544 --rc genhtml_branch_coverage=1 00:37:18.544 --rc genhtml_function_coverage=1 00:37:18.544 --rc genhtml_legend=1 00:37:18.544 --rc geninfo_all_blocks=1 00:37:18.544 --rc geninfo_unexecuted_blocks=1 00:37:18.544 00:37:18.544 ' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.544 10:09:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:37:18.545 10:09:06 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:18.545 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:18.804 Cannot find device "nvmf_init_br" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:18.804 Cannot find device "nvmf_init_br2" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:18.804 Cannot find device "nvmf_tgt_br" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:18.804 Cannot find device "nvmf_tgt_br2" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:18.804 Cannot find device "nvmf_init_br" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:18.804 Cannot find device "nvmf_init_br2" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:18.804 Cannot find device "nvmf_tgt_br" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:18.804 Cannot find device "nvmf_tgt_br2" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:18.804 Cannot find device "nvmf_br" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:37:18.804 Cannot find device "nvmf_init_if" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:18.804 Cannot find device "nvmf_init_if2" 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:18.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
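At this point nvmf_veth_init has the namespace, both veth pairs, the addresses, and an up bridge; the next commands enslave the bridge-side peers and open the firewall for port 4420. Condensed to one initiator/target pair, the topology being assembled looks like this (all interface names and addresses as in the trace; the log builds a second pair, nvmf_init_if2/nvmf_tgt_if2, the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br   # next step in the trace
  ip link set nvmf_tgt_br master nvmf_br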
00:37:18.804 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:19.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:19.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:37:19.063 00:37:19.063 --- 10.0.0.3 ping statistics --- 00:37:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.063 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:19.063 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:19.063 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:37:19.063 00:37:19.063 --- 10.0.0.4 ping statistics --- 00:37:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.063 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:19.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:19.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:37:19.063 00:37:19.063 --- 10.0.0.1 ping statistics --- 00:37:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.063 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:19.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:19.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:37:19.063 00:37:19.063 --- 10.0.0.2 ping statistics --- 00:37:19.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.063 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=118117 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 118117 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 118117 ']' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:19.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:19.063 10:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:19.063 [2024-12-05 10:09:06.686272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:19.063 [2024-12-05 10:09:06.689485] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:37:19.063 [2024-12-05 10:09:06.689625] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.321 [2024-12-05 10:09:06.886992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:19.579 [2024-12-05 10:09:07.037902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:19.579 [2024-12-05 10:09:07.038024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.579 [2024-12-05 10:09:07.038059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.579 [2024-12-05 10:09:07.038079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.579 [2024-12-05 10:09:07.038093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:19.579 [2024-12-05 10:09:07.040589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.579 [2024-12-05 10:09:07.040597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.838 [2024-12-05 10:09:07.419277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:19.838 [2024-12-05 10:09:07.419434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:19.838 [2024-12-05 10:09:07.419820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:20.097 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:20.355 5000+0 records in 00:37:20.355 5000+0 records out 00:37:20.355 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0320578 s, 319 MB/s 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.355 AIO0 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.355 [2024-12-05 10:09:07.806273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.355 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.356 [2024-12-05 10:09:07.854852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 118117 0 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 0 idle 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:20.356 10:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118117 root 20 0 20.1t 181636 103424 S 0.0 1.5 0:00.78 reactor_0' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118117 root 20 0 20.1t 181636 103424 S 0.0 1.5 0:00.78 reactor_0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:20.615 10:09:08 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 118117 1 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 1 idle 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118131 root 20 0 20.1t 181636 103424 S 0.0 1.5 0:00.00 reactor_1' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118131 root 20 0 20.1t 181636 103424 S 0.0 1.5 0:00.00 reactor_1 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=118188 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # 
BUSY_THRESHOLD=30 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 118117 0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 118117 0 busy 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:20.615 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118117 root 20 0 20.1t 181636 103424 S 6.7 1.5 0:00.80 reactor_0' 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118117 root 20 0 20.1t 181636 103424 S 6.7 1.5 0:00.80 reactor_0 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:20.874 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:37:21.811 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:37:21.811 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:21.811 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:21.811 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118117 root 20 0 20.1t 194052 104576 R 99.9 1.6 0:02.15 reactor_0' 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118117 root 20 0 20.1t 194052 104576 R 99.9 1.6 0:02.15 reactor_0 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 118117 1 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 118117 1 busy 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:22.071 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118131 root 20 0 20.1t 194308 104576 R 62.5 1.6 0:00.81 reactor_1' 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118131 root 20 0 20.1t 194308 104576 R 62.5 1.6 0:00.81 reactor_1 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:22.331 10:09:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 118188 00:37:32.319 Initializing NVMe Controllers 00:37:32.319 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:37:32.319 Controller IO queue size 256, less than required. 00:37:32.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:32.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:32.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:32.319 Initialization complete. Launching workers. 
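The latency table that follows is the output of the spdk_nvme_perf invocation traced above. Reduced to its essentials, the target setup and workload look like this (a minimal sketch: rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the rpc.py path and default RPC socket are assumptions; the NQN, serial, address, and perf flags are verbatim from the trace):

# build the TCP target, as target/interrupt.sh does above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# drive randrw I/O (30% read mix) from cores 2-3 (-c 0xC) for 10 s so the idle reactors go busy
build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The busy/idle verdicts interleaved with it come from interrupt/common.sh, which samples the reactor thread's %CPU with top and compares it against the thresholds visible in the locals above (busy_threshold=65 and idle_threshold=30 for the plain checks, BUSY_THRESHOLD=30 while perf is running). A simplified standalone sketch of that probe, using the pid and reactor index from this trace:

pid=118117 idx=0                                   # target pid and reactor index from the trace
cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
        | sed -e 's/^\s*//g' | awk '{print $9}')   # ninth column of top's thread line is %CPU
cpu=${cpu%.*}                                      # truncate the fraction (6.7 -> 6, as the trace shows)
(( cpu > 30 )) && echo "reactor_$idx is busy" || echo "reactor_$idx is idle"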
00:37:32.319 ======================================================== 00:37:32.319 Latency(us) 00:37:32.319 Device Information : IOPS MiB/s Average min max 00:37:32.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 4197.10 16.39 61151.29 11389.07 151416.55 00:37:32.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 4633.90 18.10 55346.43 12107.05 148038.20 00:37:32.319 ======================================================== 00:37:32.319 Total : 8831.00 34.50 58105.30 11389.07 151416.55 00:37:32.319 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 118117 0 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 0 idle 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:32.319 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118117 root 20 0 20.1t 195460 104576 S 0.0 1.6 0:15.50 reactor_0' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118117 root 20 0 20.1t 195460 104576 S 0.0 1.6 0:15.50 reactor_0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 118117 1 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 1 idle 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 
-- # local idx=1 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118131 root 20 0 20.1t 195460 104576 S 0.0 1.6 0:07.39 reactor_1' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118131 root 20 0 20.1t 195460 104576 S 0.0 1.6 0:07.39 reactor_1 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:32.320 10:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:37:32.320 10:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:32.320 10:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:32.320 10:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:32.320 10:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:32.320 10:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 
-- # for i in {0..1} 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 118117 0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 0 idle 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118117 root 20 0 20.1t 201092 106240 S 6.7 1.6 0:15.60 reactor_0' 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118117 root 20 0 20.1t 201092 106240 S 6.7 1.6 0:15.60 reactor_0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 118117 1 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 118117 1 idle 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=118117 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:33.699 10:09:21 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 118117 -w 256 00:37:33.699 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 118131 root 20 0 20.1t 201092 106240 S 0.0 1.6 0:07.42 reactor_1' 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 118131 root 20 0 20.1t 201092 106240 S 0.0 1.6 0:07.42 reactor_1 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:33.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:33.966 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:34.226 rmmod nvme_tcp 00:37:34.226 rmmod nvme_fabrics 00:37:34.226 rmmod nvme_keyring 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 118117 ']' 00:37:34.226 
10:09:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 118117 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 118117 ']' 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 118117 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118117 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:34.226 killing process with pid 118117 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118117' 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 118117 00:37:34.226 10:09:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 118117 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:35.604 10:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:37:35.604 ************************************ 00:37:35.604 END TEST nvmf_interrupt 00:37:35.604 ************************************ 00:37:35.604 00:37:35.604 real 0m17.257s 00:37:35.604 user 0m30.731s 00:37:35.604 sys 0m8.126s 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.604 10:09:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:35.863 ************************************ 00:37:35.863 END TEST nvmf_tcp 00:37:35.863 ************************************ 00:37:35.863 00:37:35.863 real 30m16.765s 00:37:35.863 user 87m20.147s 00:37:35.863 sys 6m14.579s 00:37:35.863 10:09:23 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.863 10:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:35.863 10:09:23 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:35.863 10:09:23 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:35.863 10:09:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:35.863 10:09:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.863 10:09:23 -- common/autotest_common.sh@10 -- # set +x 00:37:35.863 ************************************ 00:37:35.863 START TEST spdkcli_nvmf_tcp 00:37:35.863 ************************************ 00:37:35.863 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:35.863 * Looking for test storage... 
00:37:35.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:37:35.863 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:35.863 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:37:35.863 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.123 --rc genhtml_branch_coverage=1 00:37:36.123 --rc genhtml_function_coverage=1 00:37:36.123 --rc genhtml_legend=1 00:37:36.123 --rc geninfo_all_blocks=1 00:37:36.123 --rc geninfo_unexecuted_blocks=1 00:37:36.123 00:37:36.123 ' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.123 --rc genhtml_branch_coverage=1 
00:37:36.123 --rc genhtml_function_coverage=1 00:37:36.123 --rc genhtml_legend=1 00:37:36.123 --rc geninfo_all_blocks=1 00:37:36.123 --rc geninfo_unexecuted_blocks=1 00:37:36.123 00:37:36.123 ' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.123 --rc genhtml_branch_coverage=1 00:37:36.123 --rc genhtml_function_coverage=1 00:37:36.123 --rc genhtml_legend=1 00:37:36.123 --rc geninfo_all_blocks=1 00:37:36.123 --rc geninfo_unexecuted_blocks=1 00:37:36.123 00:37:36.123 ' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.123 --rc genhtml_branch_coverage=1 00:37:36.123 --rc genhtml_function_coverage=1 00:37:36.123 --rc genhtml_legend=1 00:37:36.123 --rc geninfo_all_blocks=1 00:37:36.123 --rc geninfo_unexecuted_blocks=1 00:37:36.123 00:37:36.123 ' 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:36.123 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:36.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:37:36.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=118530 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 118530 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 118530 ']' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.124 10:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:36.124 [2024-12-05 10:09:23.700079] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:37:36.124 [2024-12-05 10:09:23.700283] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118530 ] 00:37:36.383 [2024-12-05 10:09:23.895255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:36.641 [2024-12-05 10:09:24.044598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.641 [2024-12-05 10:09:24.044604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:37.210 10:09:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:37.210 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:37.210 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:37.210 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:37.210 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:37.210 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:37.210 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:37.210 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:37.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:37.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:37.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:37.210 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:37.211 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:37.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:37.211 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:37.211 ' 00:37:40.499 [2024-12-05 10:09:27.573585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.457 [2024-12-05 10:09:28.896532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:43.991 [2024-12-05 10:09:31.363593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:45.897 [2024-12-05 10:09:33.486091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:47.801 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:47.801 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:47.801 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:47.801 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:47.801 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:47.801 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:47.801 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:47.801 10:09:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.368 10:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:48.368 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:48.368 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:48.368 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:48.368 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:48.368 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:48.368 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:48.368 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:48.368 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:48.368 ' 00:37:54.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:54.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:54.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:54.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:54.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:54.944 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:54.944 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:54.944 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:54.944 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 118530 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 118530 ']' 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 118530 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118530 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118530' 00:37:54.944 killing process with pid 118530 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 118530 00:37:54.944 10:09:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 118530 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:55.208 Process with pid 118530 is not found 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 118530 ']' 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 118530 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 118530 ']' 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 118530 00:37:55.208 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (118530) - No such process 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 118530 is not found' 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:55.208 ************************************ 00:37:55.208 END TEST spdkcli_nvmf_tcp 00:37:55.208 ************************************ 
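Everything replayed above can also be driven interactively. spdkcli executes a single command per invocation when given arguments, which is exactly how the check_match step used it (scripts/spdkcli.py ll /nvmf). A minimal sketch along the lines of the create/delete passes just traced (serials, ports, and node paths verbatim from the trace; issuing each command through a separate spdkcli.py call is an assumption, since the test batches them through spdkcli_job.py):

# create a malloc bdev and export it over NVMe/TCP on the loopback listener
scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# inspect the tree; this listing is what check_match diffs against spdkcli_nvmf.test.match
scripts/spdkcli.py ll /nvmf

# tear down in the same order as the clear-config pass above
scripts/spdkcli.py /nvmf/subsystem delete_all
scripts/spdkcli.py /bdevs/malloc delete Malloc1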
00:37:55.208 00:37:55.208 real 0m19.498s 00:37:55.208 user 0m41.427s 00:37:55.208 sys 0m1.241s 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.208 10:09:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:55.468 10:09:42 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:55.468 10:09:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:55.468 10:09:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.468 10:09:42 -- common/autotest_common.sh@10 -- # set +x 00:37:55.468 ************************************ 00:37:55.468 START TEST nvmf_identify_passthru 00:37:55.468 ************************************ 00:37:55.468 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:55.468 * Looking for test storage... 00:37:55.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:55.468 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:55.468 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:37:55.468 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:55.468 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:55.468 10:09:43 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:55.468 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:55.468 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.468 --rc genhtml_branch_coverage=1 00:37:55.468 --rc genhtml_function_coverage=1 00:37:55.468 --rc genhtml_legend=1 00:37:55.468 --rc geninfo_all_blocks=1 00:37:55.468 --rc geninfo_unexecuted_blocks=1 00:37:55.468 00:37:55.468 ' 00:37:55.468 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.468 --rc genhtml_branch_coverage=1 00:37:55.468 --rc genhtml_function_coverage=1 00:37:55.468 --rc genhtml_legend=1 00:37:55.468 --rc geninfo_all_blocks=1 00:37:55.468 --rc geninfo_unexecuted_blocks=1 00:37:55.468 00:37:55.468 ' 00:37:55.468 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.468 --rc genhtml_branch_coverage=1 00:37:55.469 --rc genhtml_function_coverage=1 00:37:55.469 --rc genhtml_legend=1 00:37:55.469 --rc geninfo_all_blocks=1 00:37:55.469 --rc geninfo_unexecuted_blocks=1 00:37:55.469 00:37:55.469 ' 00:37:55.469 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:55.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.469 --rc genhtml_branch_coverage=1 00:37:55.469 --rc genhtml_function_coverage=1 00:37:55.469 --rc genhtml_legend=1 00:37:55.469 --rc geninfo_all_blocks=1 00:37:55.469 --rc geninfo_unexecuted_blocks=1 00:37:55.469 00:37:55.469 ' 00:37:55.469 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:55.469 
10:09:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:55.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:55.469 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:55.469 10:09:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.469 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.469 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:55.469 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:55.469 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:55.728 Cannot find device "nvmf_init_br" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:55.728 Cannot find device "nvmf_init_br2" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:55.728 Cannot find device "nvmf_tgt_br" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:55.728 Cannot find device "nvmf_tgt_br2" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:55.728 Cannot find device "nvmf_init_br" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:55.728 Cannot find device "nvmf_init_br2" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:55.728 Cannot find device "nvmf_tgt_br" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:55.728 Cannot find device "nvmf_tgt_br2" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:55.728 Cannot find device "nvmf_br" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:55.728 Cannot find device "nvmf_init_if" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:55.728 Cannot find device "nvmf_init_if2" 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:55.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:55.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:55.728 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:55.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:55.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:37:55.988 00:37:55.988 --- 10.0.0.3 ping statistics --- 00:37:55.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.988 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:55.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:55.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:37:55.988 00:37:55.988 --- 10.0.0.4 ping statistics --- 00:37:55.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.988 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:55.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:55.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:37:55.988 00:37:55.988 --- 10.0.0.1 ping statistics --- 00:37:55.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.988 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:55.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:55.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:37:55.988 00:37:55.988 --- 10.0.0.2 ping statistics --- 00:37:55.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.988 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:55.988 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:37:55.988 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:37:55.988 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:56.248 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:37:56.248 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:37:56.248 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:56.248 10:09:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:56.507 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:37:56.507 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:56.507 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.507 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.766 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.766 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=119070 00:37:56.766 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:56.766 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:56.766 10:09:44 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 119070 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 119070 ']' 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.766 10:09:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.766 [2024-12-05 10:09:44.318788] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:37:56.766 [2024-12-05 10:09:44.318991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.025 [2024-12-05 10:09:44.511673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:57.284 [2024-12-05 10:09:44.668631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.284 [2024-12-05 10:09:44.668718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.284 [2024-12-05 10:09:44.668748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.284 [2024-12-05 10:09:44.668763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:57.284 [2024-12-05 10:09:44.668780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.284 [2024-12-05 10:09:44.671420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.284 [2024-12-05 10:09:44.671537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.284 [2024-12-05 10:09:44.671597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.284 [2024-12-05 10:09:44.671614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:57.852 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.852 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.852 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.110 [2024-12-05 10:09:45.640549] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.110 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.110 [2024-12-05 10:09:45.656842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.110 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.110 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.110 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.369 Nvme0n1 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.369 [2024-12-05 10:09:45.821963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.369 [ 00:37:58.369 { 00:37:58.369 "allow_any_host": true, 00:37:58.369 "hosts": [], 00:37:58.369 "listen_addresses": [], 00:37:58.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:58.369 "subtype": "Discovery" 00:37:58.369 }, 00:37:58.369 { 00:37:58.369 "allow_any_host": true, 00:37:58.369 "hosts": [], 00:37:58.369 "listen_addresses": [ 00:37:58.369 { 00:37:58.369 "adrfam": "IPv4", 00:37:58.369 "traddr": "10.0.0.3", 00:37:58.369 "trsvcid": "4420", 00:37:58.369 "trtype": "TCP" 00:37:58.369 } 00:37:58.369 ], 00:37:58.369 "max_cntlid": 65519, 00:37:58.369 "max_namespaces": 1, 00:37:58.369 "min_cntlid": 1, 00:37:58.369 "model_number": "SPDK bdev Controller", 00:37:58.369 "namespaces": [ 00:37:58.369 { 00:37:58.369 "bdev_name": "Nvme0n1", 00:37:58.369 "name": "Nvme0n1", 00:37:58.369 "nguid": "BF433031963B4BD28DA0C6B9EE164901", 00:37:58.369 "nsid": 1, 00:37:58.369 "uuid": "bf433031-963b-4bd2-8da0-c6b9ee164901" 00:37:58.369 } 00:37:58.369 ], 00:37:58.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:58.369 "serial_number": "SPDK00000000000001", 00:37:58.369 "subtype": "NVMe" 00:37:58.369 } 00:37:58.369 ] 00:37:58.369 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:58.369 10:09:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:58.627 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:37:58.627 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:58.627 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:58.627 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:59.193 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:37:59.193 10:09:46 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:37:59.193 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:37:59.193 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.193 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.194 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:59.194 10:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.194 rmmod nvme_tcp 00:37:59.194 rmmod nvme_fabrics 00:37:59.194 rmmod nvme_keyring 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 119070 ']' 00:37:59.194 10:09:46 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 119070 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 119070 ']' 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 119070 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119070 00:37:59.194 killing process with pid 119070 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119070' 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 119070 00:37:59.194 10:09:46 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 119070 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:00.128 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.388 10:09:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:00.388 10:09:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.388 10:09:47 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:38:00.388 00:38:00.388 real 0m5.076s 00:38:00.388 user 0m12.384s 00:38:00.388 sys 0m1.361s 00:38:00.388 10:09:47 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:00.388 10:09:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.388 ************************************ 00:38:00.388 END TEST nvmf_identify_passthru 00:38:00.388 ************************************ 00:38:00.388 10:09:47 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:38:00.388 10:09:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:00.388 10:09:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.388 10:09:47 -- common/autotest_common.sh@10 -- # set +x 00:38:00.388 ************************************ 00:38:00.388 START TEST nvmf_dif 00:38:00.388 ************************************ 00:38:00.388 10:09:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:38:00.648 * Looking for test storage... 
00:38:00.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.648 --rc genhtml_branch_coverage=1 00:38:00.648 --rc genhtml_function_coverage=1 00:38:00.648 --rc genhtml_legend=1 00:38:00.648 --rc geninfo_all_blocks=1 00:38:00.648 --rc geninfo_unexecuted_blocks=1 00:38:00.648 00:38:00.648 ' 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.648 --rc genhtml_branch_coverage=1 00:38:00.648 --rc genhtml_function_coverage=1 00:38:00.648 --rc genhtml_legend=1 00:38:00.648 --rc geninfo_all_blocks=1 00:38:00.648 --rc geninfo_unexecuted_blocks=1 00:38:00.648 00:38:00.648 ' 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:38:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.648 --rc genhtml_branch_coverage=1 00:38:00.648 --rc genhtml_function_coverage=1 00:38:00.648 --rc genhtml_legend=1 00:38:00.648 --rc geninfo_all_blocks=1 00:38:00.648 --rc geninfo_unexecuted_blocks=1 00:38:00.648 00:38:00.648 ' 00:38:00.648 10:09:48 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.648 --rc genhtml_branch_coverage=1 00:38:00.648 --rc genhtml_function_coverage=1 00:38:00.648 --rc genhtml_legend=1 00:38:00.648 --rc geninfo_all_blocks=1 00:38:00.648 --rc geninfo_unexecuted_blocks=1 00:38:00.648 00:38:00.648 ' 00:38:00.648 10:09:48 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.648 10:09:48 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.648 10:09:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.648 10:09:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.648 10:09:48 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.648 10:09:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.648 10:09:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:00.649 10:09:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:00.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.649 10:09:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:00.649 10:09:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:00.649 10:09:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:00.649 10:09:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:00.649 10:09:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.649 10:09:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:00.649 10:09:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:38:00.649 10:09:48 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:00.649 Cannot find device "nvmf_init_br" 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@162 -- # true 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:00.649 Cannot find device "nvmf_init_br2" 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@163 -- # true 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:00.649 Cannot find device "nvmf_tgt_br" 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@164 -- # true 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:00.649 Cannot find device "nvmf_tgt_br2" 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@165 -- # true 00:38:00.649 10:09:48 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:00.908 Cannot find device "nvmf_init_br" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@166 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:00.908 Cannot find device "nvmf_init_br2" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@167 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:00.908 Cannot find device "nvmf_tgt_br" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@168 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:00.908 Cannot find device "nvmf_tgt_br2" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@169 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:00.908 Cannot find device "nvmf_br" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@170 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:38:00.908 Cannot find device "nvmf_init_if" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@171 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:00.908 Cannot find device "nvmf_init_if2" 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@172 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:00.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@173 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:00.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@174 -- # true 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:00.908 10:09:48 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:00.909 10:09:48 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:01.168 10:09:48 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:01.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:01.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:38:01.168 00:38:01.168 --- 10.0.0.3 ping statistics --- 00:38:01.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.168 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:38:01.168 10:09:48 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:01.168 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:01.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:38:01.169 00:38:01.169 --- 10.0.0.4 ping statistics --- 00:38:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.169 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:01.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:01.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:38:01.169 00:38:01.169 --- 10.0.0.1 ping statistics --- 00:38:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.169 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:01.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:01.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:38:01.169 00:38:01.169 --- 10.0.0.2 ping statistics --- 00:38:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.169 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:01.169 10:09:48 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:01.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:01.429 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:01.429 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:01.429 10:09:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:01.688 10:09:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:01.688 10:09:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:01.688 10:09:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.688 10:09:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=119503 00:38:01.688 10:09:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:01.688 10:09:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 119503 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 119503 ']' 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.688 10:09:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.688 [2024-12-05 10:09:49.225708] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:38:01.688 [2024-12-05 10:09:49.225907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.948 [2024-12-05 10:09:49.420406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.208 [2024-12-05 10:09:49.583253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:02.208 [2024-12-05 10:09:49.583348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.208 [2024-12-05 10:09:49.583375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.208 [2024-12-05 10:09:49.583415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.208 [2024-12-05 10:09:49.583438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:02.208 [2024-12-05 10:09:49.585097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:02.777 10:09:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 10:09:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.777 10:09:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:02.777 10:09:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 [2024-12-05 10:09:50.253558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.777 10:09:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 ************************************ 00:38:02.777 START TEST fio_dif_1_default 00:38:02.777 ************************************ 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 bdev_null0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.777 10:09:50 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:02.777 [2024-12-05 10:09:50.301818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.777 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.777 { 00:38:02.777 "params": { 00:38:02.777 "name": "Nvme$subsystem", 00:38:02.777 "trtype": "$TEST_TRANSPORT", 00:38:02.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.777 "adrfam": "ipv4", 00:38:02.777 "trsvcid": "$NVMF_PORT", 00:38:02.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.778 "hdgst": ${hdgst:-false}, 00:38:02.778 "ddgst": ${ddgst:-false} 00:38:02.778 }, 00:38:02.778 "method": "bdev_nvme_attach_controller" 00:38:02.778 } 00:38:02.778 EOF 00:38:02.778 )") 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:02.778 10:09:50 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.778 "params": { 00:38:02.778 "name": "Nvme0", 00:38:02.778 "trtype": "tcp", 00:38:02.778 "traddr": "10.0.0.3", 00:38:02.778 "adrfam": "ipv4", 00:38:02.778 "trsvcid": "4420", 00:38:02.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:02.778 "hdgst": false, 00:38:02.778 "ddgst": false 00:38:02.778 }, 00:38:02.778 "method": "bdev_nvme_attach_controller" 00:38:02.778 }' 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:02.778 10:09:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:03.037 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:03.037 fio-3.35 00:38:03.037 Starting 1 thread 00:38:15.254 00:38:15.254 filename0: (groupid=0, jobs=1): err= 0: pid=119587: Thu Dec 5 10:10:01 2024 00:38:15.254 read: IOPS=126, BW=506KiB/s (518kB/s)(5072KiB/10022msec) 00:38:15.254 slat (nsec): min=7269, max=53644, avg=11107.20, stdev=5355.66 00:38:15.254 clat (usec): min=477, max=41510, avg=31578.30, stdev=17137.26 00:38:15.254 lat (usec): min=484, max=41531, avg=31589.41, stdev=17137.14 00:38:15.254 clat percentiles (usec): 00:38:15.254 | 1.00th=[ 486], 5.00th=[ 502], 10.00th=[ 510], 20.00th=[ 545], 00:38:15.254 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:15.254 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:15.254 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:38:15.254 | 99.99th=[41681] 00:38:15.254 bw ( KiB/s): min= 384, max= 704, per=99.79%, avg=505.60, stdev=84.09, samples=20 00:38:15.254 iops : min= 96, max= 176, avg=126.40, stdev=21.02, samples=20 00:38:15.254 lat (usec) : 500=5.21%, 750=17.51%, 1000=0.32% 00:38:15.254 lat (msec) : 2=0.32%, 50=76.66% 00:38:15.254 cpu : usr=93.44%, 
sys=6.03%, ctx=23, majf=0, minf=1636 00:38:15.254 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:15.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.254 issued rwts: total=1268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:15.254 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:15.254 00:38:15.254 Run status group 0 (all jobs): 00:38:15.254 READ: bw=506KiB/s (518kB/s), 506KiB/s-506KiB/s (518kB/s-518kB/s), io=5072KiB (5194kB), run=10022-10022msec 00:38:15.254 ----------------------------------------------------- 00:38:15.254 Suppressions used: 00:38:15.254 count bytes template 00:38:15.254 1 8 /usr/src/fio/parse.c 00:38:15.254 1 8 libtcmalloc_minimal.so 00:38:15.254 1 904 libcrypto.so 00:38:15.254 ----------------------------------------------------- 00:38:15.254 00:38:15.254 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:15.254 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:15.254 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:15.254 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 00:38:15.255 real 0m12.324s 00:38:15.255 user 0m11.182s 00:38:15.255 sys 0m1.016s 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.255 ************************************ 00:38:15.255 END TEST fio_dif_1_default 00:38:15.255 ************************************ 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:15.255 10:10:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:15.255 10:10:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 ************************************ 00:38:15.255 START TEST fio_dif_1_multi_subsystems 00:38:15.255 ************************************ 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:15.255 10:10:02 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 bdev_null0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 [2024-12-05 10:10:02.674597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 bdev_null1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:15.255 { 00:38:15.255 "params": { 00:38:15.255 "name": "Nvme$subsystem", 00:38:15.255 "trtype": "$TEST_TRANSPORT", 00:38:15.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.255 "adrfam": "ipv4", 00:38:15.255 "trsvcid": "$NVMF_PORT", 00:38:15.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.255 "hdgst": ${hdgst:-false}, 00:38:15.255 "ddgst": ${ddgst:-false} 00:38:15.255 }, 00:38:15.255 "method": "bdev_nvme_attach_controller" 00:38:15.255 } 00:38:15.255 EOF 00:38:15.255 )") 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:15.255 10:10:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:15.255 { 00:38:15.255 "params": { 00:38:15.255 "name": "Nvme$subsystem", 00:38:15.255 "trtype": "$TEST_TRANSPORT", 00:38:15.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.255 "adrfam": "ipv4", 00:38:15.255 "trsvcid": "$NVMF_PORT", 00:38:15.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.255 "hdgst": ${hdgst:-false}, 00:38:15.255 "ddgst": ${ddgst:-false} 00:38:15.255 }, 00:38:15.255 "method": "bdev_nvme_attach_controller" 00:38:15.255 } 00:38:15.255 EOF 00:38:15.255 )") 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
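The entries just traced show gen_nvmf_target_json assembling the attach configuration for fio's spdk_bdev ioengine: one bdev_nvme_attach_controller fragment per subsystem, comma-joined (the IFS=, / printf pair) and pretty-printed with jq, then handed to fio over /dev/fd/62. A minimal standalone sketch of the same pattern, for readers following the trace: the two-subsystem loop, the bdev.json file name, and the "subsystems"/"bdev" envelope are illustrative assumptions here, not a copy of SPDK's exact helper.

#!/usr/bin/env bash
# Sketch (assumptions as noted above): build one attach-controller JSON
# fragment per subsystem, matching the listeners created earlier in this
# test (10.0.0.3:4420, cnode0/cnode1).
config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Comma-join in a subshell so IFS is scoped to this one expansion, wrap the
# fragments in an SPDK-style JSON config envelope, and pretty-print with jq.
jq . <<JSON > bdev.json
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $( (IFS=,; printf '%s' "${config[*]}") ) ] } ] }
JSON
# then: fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json <jobfile>
# (the harness above passes the config as /dev/fd/62 instead of a file)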
00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:15.255 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:15.255 "params": { 00:38:15.255 "name": "Nvme0", 00:38:15.255 "trtype": "tcp", 00:38:15.256 "traddr": "10.0.0.3", 00:38:15.256 "adrfam": "ipv4", 00:38:15.256 "trsvcid": "4420", 00:38:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:15.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:15.256 "hdgst": false, 00:38:15.256 "ddgst": false 00:38:15.256 }, 00:38:15.256 "method": "bdev_nvme_attach_controller" 00:38:15.256 },{ 00:38:15.256 "params": { 00:38:15.256 "name": "Nvme1", 00:38:15.256 "trtype": "tcp", 00:38:15.256 "traddr": "10.0.0.3", 00:38:15.256 "adrfam": "ipv4", 00:38:15.256 "trsvcid": "4420", 00:38:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:15.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:15.256 "hdgst": false, 00:38:15.256 "ddgst": false 00:38:15.256 }, 00:38:15.256 "method": "bdev_nvme_attach_controller" 00:38:15.256 }' 00:38:15.256 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:15.256 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:15.256 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:38:15.256 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:15.256 10:10:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:15.515 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:15.515 fio-3.35 00:38:15.515 Starting 2 threads 00:38:27.721 00:38:27.721 filename0: (groupid=0, jobs=1): err= 0: pid=119746: Thu Dec 5 10:10:13 2024 00:38:27.722 read: IOPS=119, BW=479KiB/s (491kB/s)(4800KiB/10013msec) 00:38:27.722 slat (nsec): min=7310, max=67310, avg=11673.42, stdev=5669.21 00:38:27.722 clat (usec): min=461, max=41488, avg=33336.94, stdev=15890.96 00:38:27.722 lat (usec): min=469, max=41502, avg=33348.62, stdev=15890.81 00:38:27.722 clat percentiles (usec): 00:38:27.722 | 1.00th=[ 486], 5.00th=[ 502], 10.00th=[ 515], 20.00th=[40633], 00:38:27.722 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:27.722 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:27.722 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:38:27.722 | 99.99th=[41681] 00:38:27.722 bw ( KiB/s): min= 384, max= 608, per=48.69%, avg=478.40, stdev=58.25, samples=20 00:38:27.722 iops : min= 96, max= 152, avg=119.60, stdev=14.56, samples=20 00:38:27.722 lat (usec) : 500=4.33%, 750=14.33% 00:38:27.722 lat (msec) : 2=0.33%, 50=81.00% 00:38:27.722 cpu : usr=95.88%, sys=3.63%, ctx=18, majf=0, minf=1636 00:38:27.722 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.722 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.722 
latency : target=0, window=0, percentile=100.00%, depth=4 00:38:27.722 filename1: (groupid=0, jobs=1): err= 0: pid=119747: Thu Dec 5 10:10:13 2024 00:38:27.722 read: IOPS=125, BW=503KiB/s (515kB/s)(5040KiB/10024msec) 00:38:27.722 slat (nsec): min=7292, max=58857, avg=11109.74, stdev=4824.51 00:38:27.722 clat (usec): min=466, max=41557, avg=31786.05, stdev=17015.01 00:38:27.722 lat (usec): min=474, max=41570, avg=31797.16, stdev=17014.96 00:38:27.722 clat percentiles (usec): 00:38:27.722 | 1.00th=[ 482], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 545], 00:38:27.722 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:27.722 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:27.722 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:38:27.722 | 99.99th=[41681] 00:38:27.722 bw ( KiB/s): min= 416, max= 640, per=51.14%, avg=502.40, stdev=63.24, samples=20 00:38:27.722 iops : min= 104, max= 160, avg=125.60, stdev=15.81, samples=20 00:38:27.722 lat (usec) : 500=5.71%, 750=16.19%, 1000=0.32% 00:38:27.722 lat (msec) : 2=0.63%, 50=77.14% 00:38:27.722 cpu : usr=95.91%, sys=3.62%, ctx=18, majf=0, minf=1636 00:38:27.722 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.722 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.722 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:27.722 00:38:27.722 Run status group 0 (all jobs): 00:38:27.722 READ: bw=982KiB/s (1005kB/s), 479KiB/s-503KiB/s (491kB/s-515kB/s), io=9840KiB (10.1MB), run=10013-10024msec 00:38:27.722 ----------------------------------------------------- 00:38:27.722 Suppressions used: 00:38:27.722 count bytes template 00:38:27.722 2 16 /usr/src/fio/parse.c 00:38:27.722 1 8 libtcmalloc_minimal.so 00:38:27.722 1 904 libcrypto.so 00:38:27.722 ----------------------------------------------------- 00:38:27.722 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@45 -- # for sub in "$@" 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 00:38:27.722 real 0m12.490s 00:38:27.722 user 0m21.204s 00:38:27.722 sys 0m1.135s 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.722 ************************************ 00:38:27.722 END TEST fio_dif_1_multi_subsystems 00:38:27.722 10:10:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 ************************************ 00:38:27.722 10:10:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:27.722 10:10:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:27.722 10:10:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 ************************************ 00:38:27.722 START TEST fio_dif_rand_params 00:38:27.722 ************************************ 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 bdev_null0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.722 [2024-12-05 10:10:15.218398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:27.722 { 00:38:27.722 "params": { 00:38:27.722 "name": "Nvme$subsystem", 00:38:27.722 "trtype": "$TEST_TRANSPORT", 00:38:27.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:27.722 "adrfam": "ipv4", 00:38:27.722 "trsvcid": "$NVMF_PORT", 00:38:27.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:27.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:27.722 "hdgst": ${hdgst:-false}, 00:38:27.722 "ddgst": ${ddgst:-false} 00:38:27.722 }, 00:38:27.722 "method": "bdev_nvme_attach_controller" 00:38:27.722 } 00:38:27.722 EOF 00:38:27.722 )") 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:27.722 "params": { 00:38:27.722 "name": "Nvme0", 00:38:27.722 "trtype": "tcp", 00:38:27.722 "traddr": "10.0.0.3", 00:38:27.722 "adrfam": "ipv4", 00:38:27.722 "trsvcid": "4420", 00:38:27.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.722 "hdgst": false, 00:38:27.722 "ddgst": false 00:38:27.722 }, 00:38:27.722 "method": "bdev_nvme_attach_controller" 00:38:27.722 }' 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:27.722 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:38:27.723 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:27.723 10:10:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.989 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:27.989 ... 
00:38:27.989 fio-3.35 00:38:27.989 Starting 3 threads 00:38:34.551 00:38:34.551 filename0: (groupid=0, jobs=1): err= 0: pid=119902: Thu Dec 5 10:10:21 2024 00:38:34.551 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(131MiB/5029msec) 00:38:34.551 slat (nsec): min=6433, max=68655, avg=17711.14, stdev=7087.12 00:38:34.551 clat (usec): min=5834, max=56187, avg=14356.25, stdev=11652.40 00:38:34.551 lat (usec): min=5849, max=56212, avg=14373.96, stdev=11652.63 00:38:34.551 clat percentiles (usec): 00:38:34.551 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7767], 20.00th=[ 8225], 00:38:34.551 | 30.00th=[ 8717], 40.00th=[10683], 50.00th=[11994], 60.00th=[12649], 00:38:34.551 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14877], 95.00th=[50594], 00:38:34.551 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[56361], 00:38:34.551 | 99.99th=[56361] 00:38:34.551 bw ( KiB/s): min=23296, max=33536, per=30.05%, avg=26783.40, stdev=3310.62, samples=10 00:38:34.551 iops : min= 182, max= 262, avg=209.20, stdev=25.82, samples=10 00:38:34.551 lat (msec) : 10=36.99%, 20=54.43%, 50=2.86%, 100=5.72% 00:38:34.551 cpu : usr=94.13%, sys=4.28%, ctx=69, majf=0, minf=1636 00:38:34.551 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:34.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:34.551 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:34.551 filename0: (groupid=0, jobs=1): err= 0: pid=119903: Thu Dec 5 10:10:21 2024 00:38:34.551 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(126MiB/5001msec) 00:38:34.551 slat (usec): min=4, max=101, avg=18.61, stdev= 9.10 00:38:34.551 clat (usec): min=5384, max=53574, avg=14867.47, stdev=13308.91 00:38:34.551 lat (usec): min=5399, max=53583, avg=14886.08, stdev=13308.89 00:38:34.551 clat percentiles (usec): 00:38:34.551 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8356], 00:38:34.551 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:38:34.551 | 70.00th=[11338], 80.00th=[11731], 90.00th=[48497], 95.00th=[51643], 00:38:34.551 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53740], 00:38:34.551 | 99.99th=[53740] 00:38:34.551 bw ( KiB/s): min=20736, max=31232, per=29.73%, avg=26503.56, stdev=3519.93, samples=9 00:38:34.551 iops : min= 162, max= 244, avg=207.00, stdev=27.44, samples=9 00:38:34.551 lat (msec) : 10=37.84%, 20=50.25%, 50=3.18%, 100=8.74% 00:38:34.551 cpu : usr=94.80%, sys=3.82%, ctx=10, majf=0, minf=1636 00:38:34.551 IO depths : 1=6.2%, 2=93.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:34.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:34.551 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:34.551 filename0: (groupid=0, jobs=1): err= 0: pid=119904: Thu Dec 5 10:10:21 2024 00:38:34.551 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(181MiB/5003msec) 00:38:34.551 slat (nsec): min=6025, max=57386, avg=13187.33, stdev=6926.72 00:38:34.551 clat (usec): min=4349, max=48315, avg=10345.52, stdev=4030.90 00:38:34.551 lat (usec): min=4361, max=48322, avg=10358.71, stdev=4031.00 00:38:34.551 clat percentiles (usec): 00:38:34.551 | 1.00th=[ 4359], 5.00th=[ 4424], 10.00th=[ 4490], 20.00th=[ 6718], 
00:38:34.551 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11076], 00:38:34.551 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14877], 95.00th=[15270], 00:38:34.551 | 99.00th=[16450], 99.50th=[16712], 99.90th=[47973], 99.95th=[48497], 00:38:34.551 | 99.99th=[48497] 00:38:34.551 bw ( KiB/s): min=30720, max=43776, per=41.07%, avg=36608.00, stdev=3840.00, samples=9 00:38:34.551 iops : min= 240, max= 342, avg=286.00, stdev=30.00, samples=9 00:38:34.551 lat (msec) : 10=50.97%, 20=48.82%, 50=0.21% 00:38:34.551 cpu : usr=92.76%, sys=5.34%, ctx=10, majf=0, minf=1634 00:38:34.551 IO depths : 1=31.5%, 2=68.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:34.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:34.551 issued rwts: total=1446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:34.551 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:34.551 00:38:34.551 Run status group 0 (all jobs): 00:38:34.551 READ: bw=87.0MiB/s (91.3MB/s), 25.2MiB/s-36.1MiB/s (26.4MB/s-37.9MB/s), io=438MiB (459MB), run=5001-5029msec 00:38:35.119 ----------------------------------------------------- 00:38:35.119 Suppressions used: 00:38:35.119 count bytes template 00:38:35.119 5 44 /usr/src/fio/parse.c 00:38:35.119 1 8 libtcmalloc_minimal.so 00:38:35.119 1 904 libcrypto.so 00:38:35.119 ----------------------------------------------------- 00:38:35.119 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.119 bdev_null0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.119 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 [2024-12-05 10:10:22.565366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 bdev_null1 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 bdev_null2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.120 { 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme$subsystem", 00:38:35.120 "trtype": "$TEST_TRANSPORT", 00:38:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "$NVMF_PORT", 00:38:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.120 "hdgst": ${hdgst:-false}, 00:38:35.120 "ddgst": ${ddgst:-false} 00:38:35.120 }, 00:38:35.120 "method": "bdev_nvme_attach_controller" 00:38:35.120 } 00:38:35.120 EOF 00:38:35.120 )") 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.120 { 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme$subsystem", 00:38:35.120 "trtype": "$TEST_TRANSPORT", 00:38:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "$NVMF_PORT", 00:38:35.120 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.120 "hdgst": ${hdgst:-false}, 00:38:35.120 "ddgst": ${ddgst:-false} 00:38:35.120 }, 00:38:35.120 "method": "bdev_nvme_attach_controller" 00:38:35.120 } 00:38:35.120 EOF 00:38:35.120 )") 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.120 { 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme$subsystem", 00:38:35.120 "trtype": "$TEST_TRANSPORT", 00:38:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "$NVMF_PORT", 00:38:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.120 "hdgst": ${hdgst:-false}, 00:38:35.120 "ddgst": ${ddgst:-false} 00:38:35.120 }, 00:38:35.120 "method": "bdev_nvme_attach_controller" 00:38:35.120 } 00:38:35.120 EOF 00:38:35.120 )") 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:35.120 10:10:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme0", 00:38:35.120 "trtype": "tcp", 00:38:35.120 "traddr": "10.0.0.3", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "4420", 00:38:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.120 "hdgst": false, 00:38:35.120 "ddgst": false 00:38:35.120 }, 00:38:35.120 "method": "bdev_nvme_attach_controller" 00:38:35.120 },{ 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme1", 00:38:35.120 "trtype": "tcp", 00:38:35.120 "traddr": "10.0.0.3", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "4420", 00:38:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.120 "hdgst": false, 00:38:35.120 "ddgst": false 00:38:35.120 }, 00:38:35.120 "method": "bdev_nvme_attach_controller" 00:38:35.120 },{ 00:38:35.120 "params": { 00:38:35.120 "name": "Nvme2", 00:38:35.120 "trtype": "tcp", 00:38:35.120 "traddr": "10.0.0.3", 00:38:35.120 "adrfam": "ipv4", 00:38:35.120 "trsvcid": "4420", 00:38:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:35.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:35.121 "hdgst": false, 00:38:35.121 "ddgst": false 00:38:35.121 }, 00:38:35.121 "method": "bdev_nvme_attach_controller" 00:38:35.121 }' 00:38:35.121 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:35.121 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:35.121 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:38:35.121 10:10:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:35.121 10:10:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.379 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.379 ... 00:38:35.379 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.379 ... 00:38:35.379 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.379 ... 00:38:35.379 fio-3.35 00:38:35.379 Starting 24 threads 00:38:47.590 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120000: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=158, BW=636KiB/s (651kB/s)(6400KiB/10067msec) 00:38:47.590 slat (usec): min=6, max=8076, avg=20.96, stdev=201.73 00:38:47.590 clat (msec): min=13, max=218, avg=100.46, stdev=39.97 00:38:47.590 lat (msec): min=13, max=218, avg=100.48, stdev=39.96 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 62], 00:38:47.590 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 101], 60.00th=[ 114], 00:38:47.590 | 70.00th=[ 124], 80.00th=[ 133], 90.00th=[ 157], 95.00th=[ 167], 00:38:47.590 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 220], 99.95th=[ 220], 00:38:47.590 | 99.99th=[ 220] 00:38:47.590 bw ( KiB/s): min= 400, max= 1024, per=3.95%, avg=633.60, stdev=213.71, samples=20 00:38:47.590 iops : min= 100, max= 256, avg=158.40, stdev=53.43, samples=20 00:38:47.590 lat (msec) : 20=0.44%, 50=8.38%, 100=41.50%, 250=49.69% 00:38:47.590 cpu : usr=34.20%, sys=0.56%, ctx=1245, majf=0, minf=1634 00:38:47.590 IO depths : 1=1.7%, 2=3.4%, 4=12.2%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120001: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10034msec) 00:38:47.590 slat (usec): min=5, max=3299, avg=15.88, stdev=76.00 00:38:47.590 clat (msec): min=7, max=218, avg=84.83, stdev=32.98 00:38:47.590 lat (msec): min=7, max=218, avg=84.85, stdev=32.98 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 58], 00:38:47.590 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 89], 00:38:47.590 | 70.00th=[ 103], 80.00th=[ 117], 90.00th=[ 129], 95.00th=[ 136], 00:38:47.590 | 99.00th=[ 161], 99.50th=[ 215], 99.90th=[ 220], 99.95th=[ 220], 00:38:47.590 | 99.99th=[ 220] 00:38:47.590 bw ( KiB/s): min= 464, max= 1408, per=4.67%, avg=749.40, stdev=240.52, samples=20 00:38:47.590 iops : min= 116, max= 352, avg=187.35, stdev=60.13, samples=20 00:38:47.590 lat (msec) : 10=0.11%, 20=1.48%, 50=9.53%, 100=57.57%, 250=31.30% 00:38:47.590 cpu : usr=40.96%, sys=0.79%, ctx=1212, majf=0, minf=1635 00:38:47.590 IO depths : 1=0.9%, 2=2.4%, 4=10.2%, 8=74.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: 
(groupid=0, jobs=1): err= 0: pid=120002: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=154, BW=618KiB/s (633kB/s)(6200KiB/10028msec) 00:38:47.590 slat (nsec): min=5658, max=58596, avg=16452.58, stdev=8562.24 00:38:47.590 clat (msec): min=31, max=207, avg=103.33, stdev=32.54 00:38:47.590 lat (msec): min=31, max=207, avg=103.35, stdev=32.54 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 36], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 74], 00:38:47.590 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 110], 00:38:47.590 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 157], 00:38:47.590 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 209], 00:38:47.590 | 99.99th=[ 209] 00:38:47.590 bw ( KiB/s): min= 432, max= 944, per=3.82%, avg=613.45, stdev=152.67, samples=20 00:38:47.590 iops : min= 108, max= 236, avg=153.35, stdev=38.17, samples=20 00:38:47.590 lat (msec) : 50=3.23%, 100=47.16%, 250=49.61% 00:38:47.590 cpu : usr=36.56%, sys=0.64%, ctx=1360, majf=0, minf=1633 00:38:47.590 IO depths : 1=0.6%, 2=1.4%, 4=7.2%, 8=76.3%, 16=14.5%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=90.2%, 8=6.4%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120003: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=193, BW=774KiB/s (793kB/s)(7784KiB/10051msec) 00:38:47.590 slat (usec): min=4, max=4040, avg=22.61, stdev=157.95 00:38:47.590 clat (usec): min=1867, max=227641, avg=82328.12, stdev=42422.03 00:38:47.590 lat (usec): min=1904, max=227650, avg=82350.73, stdev=42418.95 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 17], 20.00th=[ 50], 00:38:47.590 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 91], 00:38:47.590 | 70.00th=[ 107], 80.00th=[ 120], 90.00th=[ 130], 95.00th=[ 157], 00:38:47.590 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 228], 99.95th=[ 228], 00:38:47.590 | 99.99th=[ 228] 00:38:47.590 bw ( KiB/s): min= 384, max= 2738, per=4.82%, avg=773.90, stdev=495.91, samples=20 00:38:47.590 iops : min= 96, max= 684, avg=193.45, stdev=123.87, samples=20 00:38:47.590 lat (msec) : 2=0.26%, 4=2.42%, 10=5.34%, 20=3.19%, 50=9.30% 00:38:47.590 lat (msec) : 100=47.69%, 250=31.81% 00:38:47.590 cpu : usr=37.39%, sys=0.79%, ctx=1019, majf=0, minf=1636 00:38:47.590 IO depths : 1=1.5%, 2=3.3%, 4=10.6%, 8=72.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120004: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=165, BW=661KiB/s (677kB/s)(6636KiB/10033msec) 00:38:47.590 slat (usec): min=6, max=8026, avg=34.14, stdev=381.52 00:38:47.590 clat (msec): min=4, max=226, avg=96.49, stdev=41.61 00:38:47.590 lat (msec): min=4, max=226, avg=96.53, stdev=41.62 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 39], 20.00th=[ 63], 00:38:47.590 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 109], 00:38:47.590 | 70.00th=[ 120], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 
165], 00:38:47.590 | 99.00th=[ 192], 99.50th=[ 215], 99.90th=[ 228], 99.95th=[ 228], 00:38:47.590 | 99.99th=[ 228] 00:38:47.590 bw ( KiB/s): min= 384, max= 1904, per=4.09%, avg=656.30, stdev=324.20, samples=20 00:38:47.590 iops : min= 96, max= 476, avg=164.00, stdev=81.05, samples=20 00:38:47.590 lat (msec) : 10=3.74%, 20=2.77%, 50=7.84%, 100=39.18%, 250=46.47% 00:38:47.590 cpu : usr=32.67%, sys=0.62%, ctx=888, majf=0, minf=1636 00:38:47.590 IO depths : 1=3.1%, 2=6.5%, 4=16.7%, 8=64.0%, 16=9.7%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=91.5%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120005: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=243, BW=975KiB/s (998kB/s)(9784KiB/10039msec) 00:38:47.590 slat (usec): min=5, max=8039, avg=18.93, stdev=162.46 00:38:47.590 clat (usec): min=807, max=196900, avg=65541.91, stdev=45146.71 00:38:47.590 lat (usec): min=819, max=196910, avg=65560.83, stdev=45148.61 00:38:47.590 clat percentiles (usec): 00:38:47.590 | 1.00th=[ 1467], 5.00th=[ 1778], 10.00th=[ 1975], 20.00th=[ 4490], 00:38:47.590 | 30.00th=[ 42206], 40.00th=[ 59507], 50.00th=[ 74974], 60.00th=[ 83362], 00:38:47.590 | 70.00th=[ 86508], 80.00th=[ 99091], 90.00th=[121111], 95.00th=[137364], 00:38:47.590 | 99.00th=[166724], 99.50th=[183501], 99.90th=[196084], 99.95th=[196084], 00:38:47.590 | 99.99th=[196084] 00:38:47.590 bw ( KiB/s): min= 384, max= 6144, per=6.06%, avg=972.00, stdev=1226.34, samples=20 00:38:47.590 iops : min= 96, max= 1536, avg=243.00, stdev=306.58, samples=20 00:38:47.590 lat (usec) : 1000=0.33% 00:38:47.590 lat (msec) : 2=10.10%, 4=7.89%, 10=5.23%, 20=3.27%, 50=6.46% 00:38:47.590 lat (msec) : 100=47.34%, 250=19.38% 00:38:47.590 cpu : usr=40.89%, sys=0.88%, ctx=983, majf=0, minf=1634 00:38:47.590 IO depths : 1=1.4%, 2=3.9%, 4=14.7%, 8=68.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:38:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.590 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.590 filename0: (groupid=0, jobs=1): err= 0: pid=120006: Thu Dec 5 10:10:34 2024 00:38:47.590 read: IOPS=165, BW=663KiB/s (679kB/s)(6664KiB/10044msec) 00:38:47.590 slat (usec): min=4, max=11040, avg=26.57, stdev=307.48 00:38:47.590 clat (msec): min=15, max=195, avg=96.21, stdev=33.58 00:38:47.590 lat (msec): min=15, max=195, avg=96.24, stdev=33.58 00:38:47.590 clat percentiles (msec): 00:38:47.590 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 54], 20.00th=[ 72], 00:38:47.590 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 102], 00:38:47.590 | 70.00th=[ 115], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 146], 00:38:47.590 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 197], 00:38:47.590 | 99.99th=[ 197] 00:38:47.590 bw ( KiB/s): min= 384, max= 1368, per=4.11%, avg=659.55, stdev=216.14, samples=20 00:38:47.590 iops : min= 96, max= 342, avg=164.85, stdev=54.01, samples=20 00:38:47.590 lat (msec) : 20=0.84%, 50=7.62%, 100=50.42%, 250=41.12% 00:38:47.591 cpu : usr=44.44%, sys=0.87%, ctx=1357, majf=0, minf=1635 00:38:47.591 IO depths : 1=3.4%, 2=7.1%, 4=17.0%, 8=63.1%, 16=9.4%, 32=0.0%, 
>=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename0: (groupid=0, jobs=1): err= 0: pid=120007: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=179, BW=718KiB/s (735kB/s)(7208KiB/10038msec) 00:38:47.591 slat (usec): min=5, max=4063, avg=17.20, stdev=95.70 00:38:47.591 clat (msec): min=25, max=186, avg=88.86, stdev=26.50 00:38:47.591 lat (msec): min=25, max=187, avg=88.88, stdev=26.50 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 26], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 72], 00:38:47.591 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 93], 00:38:47.591 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 125], 95.00th=[ 138], 00:38:47.591 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:38:47.591 | 99.99th=[ 188] 00:38:47.591 bw ( KiB/s): min= 512, max= 1152, per=4.48%, avg=718.20, stdev=141.94, samples=20 00:38:47.591 iops : min= 128, max= 288, avg=179.55, stdev=35.49, samples=20 00:38:47.591 lat (msec) : 50=8.55%, 100=67.65%, 250=23.81% 00:38:47.591 cpu : usr=41.94%, sys=0.71%, ctx=1054, majf=0, minf=1633 00:38:47.591 IO depths : 1=1.3%, 2=2.9%, 4=10.0%, 8=73.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120008: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10007msec) 00:38:47.591 slat (usec): min=4, max=4039, avg=18.87, stdev=105.83 00:38:47.591 clat (msec): min=14, max=199, avg=109.83, stdev=31.13 00:38:47.591 lat (msec): min=14, max=199, avg=109.85, stdev=31.13 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 15], 5.00th=[ 66], 10.00th=[ 79], 20.00th=[ 85], 00:38:47.591 | 30.00th=[ 88], 40.00th=[ 99], 50.00th=[ 111], 60.00th=[ 121], 00:38:47.591 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 144], 95.00th=[ 163], 00:38:47.591 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:38:47.591 | 99.99th=[ 201] 00:38:47.591 bw ( KiB/s): min= 384, max= 768, per=3.49%, avg=559.16, stdev=129.70, samples=19 00:38:47.591 iops : min= 96, max= 192, avg=139.74, stdev=32.40, samples=19 00:38:47.591 lat (msec) : 20=1.10%, 50=1.10%, 100=40.04%, 250=57.76% 00:38:47.591 cpu : usr=43.23%, sys=0.66%, ctx=1108, majf=0, minf=1633 00:38:47.591 IO depths : 1=3.6%, 2=8.3%, 4=20.5%, 8=58.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120009: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=171, BW=687KiB/s (703kB/s)(6904KiB/10053msec) 00:38:47.591 slat (usec): min=5, max=7207, avg=29.38, stdev=268.92 00:38:47.591 clat (msec): min=21, max=204, avg=92.82, stdev=32.57 00:38:47.591 lat (msec): min=21, max=204, avg=92.85, 
stdev=32.59 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 60], 00:38:47.591 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 105], 00:38:47.591 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 132], 95.00th=[ 144], 00:38:47.591 | 99.00th=[ 157], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:38:47.591 | 99.99th=[ 205] 00:38:47.591 bw ( KiB/s): min= 384, max= 1112, per=4.27%, avg=685.75, stdev=213.33, samples=20 00:38:47.591 iops : min= 96, max= 278, avg=171.40, stdev=53.27, samples=20 00:38:47.591 lat (msec) : 50=8.52%, 100=49.13%, 250=42.35% 00:38:47.591 cpu : usr=37.48%, sys=0.70%, ctx=1190, majf=0, minf=1635 00:38:47.591 IO depths : 1=1.7%, 2=3.6%, 4=11.8%, 8=71.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120010: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=163, BW=654KiB/s (670kB/s)(6564KiB/10035msec) 00:38:47.591 slat (usec): min=4, max=4030, avg=18.18, stdev=99.48 00:38:47.591 clat (msec): min=17, max=218, avg=97.62, stdev=32.96 00:38:47.591 lat (msec): min=17, max=218, avg=97.63, stdev=32.96 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 18], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 74], 00:38:47.591 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 109], 00:38:47.591 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 136], 95.00th=[ 155], 00:38:47.591 | 99.00th=[ 171], 99.50th=[ 197], 99.90th=[ 218], 99.95th=[ 218], 00:38:47.591 | 99.99th=[ 218] 00:38:47.591 bw ( KiB/s): min= 384, max= 1280, per=4.05%, avg=649.70, stdev=198.90, samples=20 00:38:47.591 iops : min= 96, max= 320, avg=162.40, stdev=49.73, samples=20 00:38:47.591 lat (msec) : 20=1.95%, 50=3.84%, 100=50.27%, 250=43.94% 00:38:47.591 cpu : usr=37.77%, sys=0.69%, ctx=1149, majf=0, minf=1635 00:38:47.591 IO depths : 1=3.2%, 2=6.9%, 4=17.6%, 8=62.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120011: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=160, BW=640KiB/s (656kB/s)(6436KiB/10049msec) 00:38:47.591 slat (usec): min=5, max=8032, avg=32.59, stdev=346.69 00:38:47.591 clat (msec): min=19, max=209, avg=99.71, stdev=34.95 00:38:47.591 lat (msec): min=19, max=209, avg=99.74, stdev=34.94 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 59], 20.00th=[ 72], 00:38:47.591 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 109], 00:38:47.591 | 70.00th=[ 121], 80.00th=[ 128], 90.00th=[ 142], 95.00th=[ 165], 00:38:47.591 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 211], 99.95th=[ 211], 00:38:47.591 | 99.99th=[ 211] 00:38:47.591 bw ( KiB/s): min= 384, max= 1080, per=3.97%, avg=636.00, stdev=168.54, samples=20 00:38:47.591 iops : min= 96, max= 270, avg=159.00, stdev=42.14, samples=20 00:38:47.591 lat (msec) : 20=0.06%, 50=7.52%, 100=46.67%, 250=45.74% 00:38:47.591 cpu : usr=34.31%, sys=0.66%, 
ctx=1046, majf=0, minf=1633 00:38:47.591 IO depths : 1=2.1%, 2=5.0%, 4=14.9%, 8=66.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120012: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=156, BW=626KiB/s (641kB/s)(6288KiB/10039msec) 00:38:47.591 slat (usec): min=4, max=5189, avg=26.52, stdev=218.93 00:38:47.591 clat (msec): min=25, max=215, avg=101.81, stdev=30.58 00:38:47.591 lat (msec): min=25, max=215, avg=101.84, stdev=30.58 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 26], 5.00th=[ 57], 10.00th=[ 66], 20.00th=[ 75], 00:38:47.591 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 102], 60.00th=[ 113], 00:38:47.591 | 70.00th=[ 120], 80.00th=[ 129], 90.00th=[ 142], 95.00th=[ 150], 00:38:47.591 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 215], 00:38:47.591 | 99.99th=[ 215] 00:38:47.591 bw ( KiB/s): min= 440, max= 896, per=3.90%, avg=625.10, stdev=145.59, samples=20 00:38:47.591 iops : min= 110, max= 224, avg=156.25, stdev=36.38, samples=20 00:38:47.591 lat (msec) : 50=2.29%, 100=47.07%, 250=50.64% 00:38:47.591 cpu : usr=36.30%, sys=0.65%, ctx=1052, majf=0, minf=1633 00:38:47.591 IO depths : 1=3.0%, 2=6.6%, 4=16.4%, 8=64.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120013: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=149, BW=597KiB/s (611kB/s)(5976KiB/10011msec) 00:38:47.591 slat (usec): min=4, max=4040, avg=20.94, stdev=144.71 00:38:47.591 clat (msec): min=6, max=217, avg=107.05, stdev=38.67 00:38:47.591 lat (msec): min=7, max=217, avg=107.07, stdev=38.67 00:38:47.591 clat percentiles (msec): 00:38:47.591 | 1.00th=[ 11], 5.00th=[ 47], 10.00th=[ 61], 20.00th=[ 79], 00:38:47.591 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 120], 00:38:47.591 | 70.00th=[ 124], 80.00th=[ 131], 90.00th=[ 157], 95.00th=[ 188], 00:38:47.591 | 99.00th=[ 205], 99.50th=[ 207], 99.90th=[ 218], 99.95th=[ 218], 00:38:47.591 | 99.99th=[ 218] 00:38:47.591 bw ( KiB/s): min= 384, max= 944, per=3.59%, avg=575.11, stdev=158.29, samples=19 00:38:47.591 iops : min= 96, max= 236, avg=143.74, stdev=39.53, samples=19 00:38:47.591 lat (msec) : 10=0.33%, 20=1.81%, 50=3.15%, 100=40.43%, 250=54.28% 00:38:47.591 cpu : usr=36.92%, sys=0.75%, ctx=1030, majf=0, minf=1635 00:38:47.591 IO depths : 1=2.8%, 2=6.4%, 4=17.0%, 8=63.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:38:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.591 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.591 filename1: (groupid=0, jobs=1): err= 0: pid=120014: Thu Dec 5 10:10:34 2024 00:38:47.591 read: IOPS=189, BW=759KiB/s (777kB/s)(7628KiB/10056msec) 00:38:47.591 slat (usec): min=5, max=4028, avg=16.33, stdev=92.24 
00:38:47.591 clat (msec): min=3, max=173, avg=84.23, stdev=30.97 00:38:47.591 lat (msec): min=3, max=173, avg=84.24, stdev=30.97 00:38:47.591 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 45], 20.00th=[ 61], 00:38:47.592 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 91], 00:38:47.592 | 70.00th=[ 97], 80.00th=[ 111], 90.00th=[ 124], 95.00th=[ 130], 00:38:47.592 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 174], 99.95th=[ 174], 00:38:47.592 | 99.99th=[ 174] 00:38:47.592 bw ( KiB/s): min= 512, max= 1952, per=4.71%, avg=755.60, stdev=298.46, samples=20 00:38:47.592 iops : min= 128, max= 488, avg=188.90, stdev=74.61, samples=20 00:38:47.592 lat (msec) : 4=0.84%, 10=0.84%, 20=2.52%, 50=8.55%, 100=59.88% 00:38:47.592 lat (msec) : 250=27.37% 00:38:47.592 cpu : usr=35.45%, sys=0.68%, ctx=1192, majf=0, minf=1634 00:38:47.592 IO depths : 1=1.8%, 2=3.9%, 4=11.8%, 8=71.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename1: (groupid=0, jobs=1): err= 0: pid=120015: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=163, BW=655KiB/s (671kB/s)(6568KiB/10021msec) 00:38:47.592 slat (usec): min=4, max=8030, avg=21.70, stdev=217.68 00:38:47.592 clat (msec): min=24, max=190, avg=97.45, stdev=30.00 00:38:47.592 lat (msec): min=24, max=190, avg=97.47, stdev=30.00 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 30], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 74], 00:38:47.592 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 103], 00:38:47.592 | 70.00th=[ 118], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 144], 00:38:47.592 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:38:47.592 | 99.99th=[ 190] 00:38:47.592 bw ( KiB/s): min= 464, max= 992, per=3.97%, avg=637.47, stdev=146.38, samples=19 00:38:47.592 iops : min= 116, max= 248, avg=159.37, stdev=36.59, samples=19 00:38:47.592 lat (msec) : 50=4.87%, 100=51.46%, 250=43.67% 00:38:47.592 cpu : usr=35.96%, sys=0.60%, ctx=1044, majf=0, minf=1633 00:38:47.592 IO depths : 1=1.0%, 2=2.4%, 4=9.5%, 8=74.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=90.0%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120016: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=147, BW=591KiB/s (605kB/s)(5912KiB/10010msec) 00:38:47.592 slat (usec): min=4, max=8031, avg=28.66, stdev=296.40 00:38:47.592 clat (msec): min=13, max=208, avg=108.15, stdev=32.13 00:38:47.592 lat (msec): min=13, max=208, avg=108.18, stdev=32.12 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 15], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 82], 00:38:47.592 | 30.00th=[ 89], 40.00th=[ 102], 50.00th=[ 115], 60.00th=[ 122], 00:38:47.592 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 144], 95.00th=[ 153], 00:38:47.592 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 209], 99.95th=[ 209], 00:38:47.592 | 99.99th=[ 209] 00:38:47.592 bw ( KiB/s): min= 384, max= 897, per=3.52%, avg=565.79, stdev=130.66, samples=19 00:38:47.592 iops : min= 96, 
max= 224, avg=141.37, stdev=32.61, samples=19 00:38:47.592 lat (msec) : 20=1.08%, 50=2.77%, 100=36.06%, 250=60.08% 00:38:47.592 cpu : usr=44.44%, sys=0.73%, ctx=1325, majf=0, minf=1635 00:38:47.592 IO depths : 1=3.3%, 2=7.6%, 4=19.4%, 8=60.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120017: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=181, BW=728KiB/s (745kB/s)(7304KiB/10038msec) 00:38:47.592 slat (usec): min=6, max=8046, avg=25.49, stdev=281.58 00:38:47.592 clat (msec): min=11, max=209, avg=87.61, stdev=34.52 00:38:47.592 lat (msec): min=11, max=209, avg=87.63, stdev=34.53 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 51], 20.00th=[ 58], 00:38:47.592 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 93], 00:38:47.592 | 70.00th=[ 107], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 148], 00:38:47.592 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 211], 99.95th=[ 211], 00:38:47.592 | 99.99th=[ 211] 00:38:47.592 bw ( KiB/s): min= 384, max= 1280, per=4.54%, avg=728.40, stdev=243.39, samples=20 00:38:47.592 iops : min= 96, max= 320, avg=182.10, stdev=60.85, samples=20 00:38:47.592 lat (msec) : 20=1.75%, 50=8.32%, 100=57.17%, 250=32.75% 00:38:47.592 cpu : usr=36.33%, sys=0.71%, ctx=1063, majf=0, minf=1636 00:38:47.592 IO depths : 1=1.7%, 2=3.9%, 4=12.3%, 8=70.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120018: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=146, BW=585KiB/s (599kB/s)(5848KiB/10003msec) 00:38:47.592 slat (usec): min=4, max=4062, avg=28.30, stdev=210.42 00:38:47.592 clat (msec): min=13, max=188, avg=109.22, stdev=32.18 00:38:47.592 lat (msec): min=13, max=188, avg=109.25, stdev=32.17 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 15], 5.00th=[ 70], 10.00th=[ 75], 20.00th=[ 82], 00:38:47.592 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 118], 00:38:47.592 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 169], 00:38:47.592 | 99.00th=[ 184], 99.50th=[ 186], 99.90th=[ 188], 99.95th=[ 188], 00:38:47.592 | 99.99th=[ 188] 00:38:47.592 bw ( KiB/s): min= 384, max= 768, per=3.50%, avg=561.53, stdev=128.62, samples=19 00:38:47.592 iops : min= 96, max= 192, avg=140.32, stdev=32.15, samples=19 00:38:47.592 lat (msec) : 20=1.09%, 50=1.37%, 100=40.29%, 250=57.25% 00:38:47.592 cpu : usr=41.59%, sys=0.68%, ctx=1139, majf=0, minf=1635 00:38:47.592 IO depths : 1=3.1%, 2=7.5%, 4=18.7%, 8=61.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120019: Thu Dec 5 
10:10:34 2024 00:38:47.592 read: IOPS=183, BW=736KiB/s (753kB/s)(7392KiB/10050msec) 00:38:47.592 slat (usec): min=8, max=8093, avg=27.24, stdev=323.39 00:38:47.592 clat (msec): min=30, max=183, avg=86.76, stdev=29.51 00:38:47.592 lat (msec): min=30, max=183, avg=86.79, stdev=29.51 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 61], 00:38:47.592 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 89], 00:38:47.592 | 70.00th=[ 99], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 136], 00:38:47.592 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:38:47.592 | 99.99th=[ 184] 00:38:47.592 bw ( KiB/s): min= 464, max= 1104, per=4.56%, avg=732.75, stdev=206.38, samples=20 00:38:47.592 iops : min= 116, max= 276, avg=183.15, stdev=51.64, samples=20 00:38:47.592 lat (msec) : 50=7.85%, 100=63.37%, 250=28.79% 00:38:47.592 cpu : usr=41.44%, sys=0.71%, ctx=1164, majf=0, minf=1636 00:38:47.592 IO depths : 1=1.0%, 2=2.3%, 4=8.8%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120020: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=168, BW=676KiB/s (692kB/s)(6776KiB/10031msec) 00:38:47.592 slat (usec): min=5, max=8077, avg=25.39, stdev=276.18 00:38:47.592 clat (msec): min=33, max=228, avg=94.53, stdev=32.49 00:38:47.592 lat (msec): min=33, max=228, avg=94.56, stdev=32.49 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 69], 00:38:47.592 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 96], 00:38:47.592 | 70.00th=[ 109], 80.00th=[ 122], 90.00th=[ 132], 95.00th=[ 148], 00:38:47.592 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 230], 99.95th=[ 230], 00:38:47.592 | 99.99th=[ 230] 00:38:47.592 bw ( KiB/s): min= 384, max= 1024, per=4.18%, avg=671.20, stdev=179.03, samples=20 00:38:47.592 iops : min= 96, max= 256, avg=167.80, stdev=44.76, samples=20 00:38:47.592 lat (msec) : 50=7.91%, 100=55.08%, 250=37.01% 00:38:47.592 cpu : usr=32.53%, sys=0.64%, ctx=882, majf=0, minf=1635 00:38:47.592 IO depths : 1=1.1%, 2=2.4%, 4=9.3%, 8=74.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.592 filename2: (groupid=0, jobs=1): err= 0: pid=120021: Thu Dec 5 10:10:34 2024 00:38:47.592 read: IOPS=147, BW=590KiB/s (604kB/s)(5916KiB/10028msec) 00:38:47.592 slat (usec): min=8, max=11036, avg=30.59, stdev=357.37 00:38:47.592 clat (msec): min=18, max=235, avg=108.12, stdev=31.36 00:38:47.592 lat (msec): min=18, max=235, avg=108.15, stdev=31.36 00:38:47.592 clat percentiles (msec): 00:38:47.592 | 1.00th=[ 19], 5.00th=[ 62], 10.00th=[ 69], 20.00th=[ 81], 00:38:47.592 | 30.00th=[ 88], 40.00th=[ 100], 50.00th=[ 114], 60.00th=[ 121], 00:38:47.592 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 144], 95.00th=[ 157], 00:38:47.592 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 236], 99.95th=[ 236], 00:38:47.592 | 99.99th=[ 236] 00:38:47.592 bw ( KiB/s): 
min= 384, max= 890, per=3.67%, avg=588.50, stdev=142.04, samples=20 00:38:47.592 iops : min= 96, max= 222, avg=147.10, stdev=35.45, samples=20 00:38:47.592 lat (msec) : 20=1.08%, 50=1.76%, 100=38.07%, 250=59.09% 00:38:47.592 cpu : usr=37.23%, sys=0.75%, ctx=1330, majf=0, minf=1635 00:38:47.592 IO depths : 1=3.5%, 2=8.2%, 4=20.8%, 8=58.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:47.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.592 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.593 filename2: (groupid=0, jobs=1): err= 0: pid=120022: Thu Dec 5 10:10:34 2024 00:38:47.593 read: IOPS=144, BW=576KiB/s (590kB/s)(5788KiB/10041msec) 00:38:47.593 slat (usec): min=5, max=8050, avg=38.61, stdev=421.51 00:38:47.593 clat (msec): min=23, max=220, avg=110.54, stdev=34.60 00:38:47.593 lat (msec): min=23, max=220, avg=110.58, stdev=34.60 00:38:47.593 clat percentiles (msec): 00:38:47.593 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 83], 00:38:47.593 | 30.00th=[ 88], 40.00th=[ 97], 50.00th=[ 109], 60.00th=[ 120], 00:38:47.593 | 70.00th=[ 128], 80.00th=[ 132], 90.00th=[ 157], 95.00th=[ 171], 00:38:47.593 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 222], 99.95th=[ 222], 00:38:47.593 | 99.99th=[ 222] 00:38:47.593 bw ( KiB/s): min= 384, max= 896, per=3.58%, avg=574.80, stdev=159.96, samples=20 00:38:47.593 iops : min= 96, max= 224, avg=143.70, stdev=39.99, samples=20 00:38:47.593 lat (msec) : 50=2.21%, 100=40.01%, 250=57.77% 00:38:47.593 cpu : usr=36.80%, sys=0.61%, ctx=904, majf=0, minf=1636 00:38:47.593 IO depths : 1=3.2%, 2=6.7%, 4=17.1%, 8=63.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:38:47.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.593 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.593 issued rwts: total=1447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.593 filename2: (groupid=0, jobs=1): err= 0: pid=120023: Thu Dec 5 10:10:34 2024 00:38:47.593 read: IOPS=152, BW=610KiB/s (625kB/s)(6104KiB/10008msec) 00:38:47.593 slat (usec): min=4, max=4038, avg=22.78, stdev=178.25 00:38:47.593 clat (msec): min=14, max=196, avg=104.74, stdev=30.74 00:38:47.593 lat (msec): min=14, max=196, avg=104.76, stdev=30.74 00:38:47.593 clat percentiles (msec): 00:38:47.593 | 1.00th=[ 15], 5.00th=[ 57], 10.00th=[ 73], 20.00th=[ 81], 00:38:47.593 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 105], 60.00th=[ 112], 00:38:47.593 | 70.00th=[ 123], 80.00th=[ 129], 90.00th=[ 140], 95.00th=[ 167], 00:38:47.593 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 197], 99.95th=[ 197], 00:38:47.593 | 99.99th=[ 197] 00:38:47.593 bw ( KiB/s): min= 384, max= 768, per=3.68%, avg=590.16, stdev=103.61, samples=19 00:38:47.593 iops : min= 96, max= 192, avg=147.47, stdev=25.87, samples=19 00:38:47.593 lat (msec) : 20=1.05%, 50=1.51%, 100=44.89%, 250=52.56% 00:38:47.593 cpu : usr=42.37%, sys=0.65%, ctx=1334, majf=0, minf=1635 00:38:47.593 IO depths : 1=2.7%, 2=5.9%, 4=15.5%, 8=65.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:38:47.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.593 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.593 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.593 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:38:47.593 00:38:47.593 Run status group 0 (all jobs): 00:38:47.593 READ: bw=15.7MiB/s (16.4MB/s), 576KiB/s-975KiB/s (590kB/s-998kB/s), io=158MiB (165MB), run=10003-10067msec 00:38:47.852 ----------------------------------------------------- 00:38:47.852 Suppressions used: 00:38:47.852 count bytes template 00:38:47.852 45 402 /usr/src/fio/parse.c 00:38:47.852 1 8 libtcmalloc_minimal.so 00:38:47.852 1 904 libcrypto.so 00:38:47.852 ----------------------------------------------------- 00:38:47.852 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 bdev_null0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.852 [2024-12-05 10:10:35.356699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:47.852 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.852 
10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.853 bdev_null1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:47.853 { 00:38:47.853 "params": { 00:38:47.853 "name": "Nvme$subsystem", 00:38:47.853 "trtype": "$TEST_TRANSPORT", 00:38:47.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:47.853 "adrfam": "ipv4", 00:38:47.853 "trsvcid": "$NVMF_PORT", 00:38:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:47.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:47.853 "hdgst": ${hdgst:-false}, 00:38:47.853 "ddgst": ${ddgst:-false} 00:38:47.853 }, 00:38:47.853 "method": "bdev_nvme_attach_controller" 00:38:47.853 } 00:38:47.853 EOF 00:38:47.853 )") 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
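For readers reproducing the create_subsystem steps traced just above outside the test harness: the sketch below issues the same RPCs directly against a running nvmf_tgt with SPDK's scripts/rpc.py. The method names and arguments (bdev_null_create with --md-size 16 --dif-type 1, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.3:4420) are taken verbatim from the trace; the RPC_PY path and the assumption that a tcp transport has already been created are mine.

#!/usr/bin/env bash
# Minimal sketch of what create_subsystems 0 1 does above, assuming a running
# nvmf_tgt with an nvmf tcp transport already created.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from this log's checkout

for sub in 0 1; do
  # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 1
  "$RPC_PY" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  # One subsystem per bdev, open to any host, as in the trace
  "$RPC_PY" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  "$RPC_PY" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  "$RPC_PY" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.3 -s 4420
done

Each add_listener call should produce the same "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice seen in the trace.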
00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:47.853 { 00:38:47.853 "params": { 00:38:47.853 "name": "Nvme$subsystem", 00:38:47.853 "trtype": "$TEST_TRANSPORT", 00:38:47.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:47.853 "adrfam": "ipv4", 00:38:47.853 "trsvcid": "$NVMF_PORT", 00:38:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:47.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:47.853 "hdgst": ${hdgst:-false}, 00:38:47.853 "ddgst": ${ddgst:-false} 00:38:47.853 }, 00:38:47.853 "method": "bdev_nvme_attach_controller" 00:38:47.853 } 00:38:47.853 EOF 00:38:47.853 )") 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:47.853 "params": { 00:38:47.853 "name": "Nvme0", 00:38:47.853 "trtype": "tcp", 00:38:47.853 "traddr": "10.0.0.3", 00:38:47.853 "adrfam": "ipv4", 00:38:47.853 "trsvcid": "4420", 00:38:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:47.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:47.853 "hdgst": false, 00:38:47.853 "ddgst": false 00:38:47.853 }, 00:38:47.853 "method": "bdev_nvme_attach_controller" 00:38:47.853 },{ 00:38:47.853 "params": { 00:38:47.853 "name": "Nvme1", 00:38:47.853 "trtype": "tcp", 00:38:47.853 "traddr": "10.0.0.3", 00:38:47.853 "adrfam": "ipv4", 00:38:47.853 "trsvcid": "4420", 00:38:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:47.853 "hdgst": false, 00:38:47.853 "ddgst": false 00:38:47.853 }, 00:38:47.853 "method": "bdev_nvme_attach_controller" 00:38:47.853 }' 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:47.853 10:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:48.112 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:48.112 ... 00:38:48.112 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:48.112 ... 
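Note: the two bdev_nvme_attach_controller entries printed above are the entire bdev configuration fio receives on /dev/fd/62; the jobfile arrives on /dev/fd/61 from gen_fio_conf and is not echoed in this trace. An approximate reconstruction from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) follows — the bdev names and exact jobfile layout are assumptions inferred from the job banners, not captured output:

    cat <<'EOF' > dif.fio    # hypothetical path; the harness pipes this via /dev/fd/61
    [global]
    thread=1                 ; the spdk_bdev engine requires thread mode
    ioengine=spdk_bdev
    rw=randread              ; matches the "rw=randread" job banners above
    bs=8k,16k,128k           ; per-direction sizes: (R) 8k, (W) 16k, (T) 128k
    iodepth=8
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1         ; assumed: namespace 1 of the Nvme0 controller attached above
    numjobs=2

    [filename1]
    filename=Nvme1n1
    numjobs=2
    EOF

With numjobs=2 on each of the two files this accounts for the "Starting 4 threads" line that follows.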
00:38:48.112 fio-3.35 00:38:48.112 Starting 4 threads 00:38:54.702 00:38:54.702 filename0: (groupid=0, jobs=1): err= 0: pid=120149: Thu Dec 5 10:10:41 2024 00:38:54.702 read: IOPS=1826, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5002msec) 00:38:54.702 slat (nsec): min=6549, max=75826, avg=11120.11, stdev=6754.88 00:38:54.702 clat (usec): min=2299, max=7597, avg=4318.54, stdev=202.54 00:38:54.702 lat (usec): min=2317, max=7605, avg=4329.66, stdev=203.43 00:38:54.702 clat percentiles (usec): 00:38:54.702 | 1.00th=[ 4047], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4228], 00:38:54.702 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:38:54.702 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:38:54.702 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 7177], 99.95th=[ 7177], 00:38:54.702 | 99.99th=[ 7570] 00:38:54.702 bw ( KiB/s): min=13723, max=14976, per=24.94%, avg=14580.78, stdev=409.86, samples=9 00:38:54.702 iops : min= 1715, max= 1872, avg=1822.56, stdev=51.33, samples=9 00:38:54.702 lat (msec) : 4=0.27%, 10=99.73% 00:38:54.702 cpu : usr=95.10%, sys=3.76%, ctx=8, majf=0, minf=1636 00:38:54.702 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 issued rwts: total=9136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:54.702 filename0: (groupid=0, jobs=1): err= 0: pid=120150: Thu Dec 5 10:10:41 2024 00:38:54.702 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5002msec) 00:38:54.702 slat (usec): min=6, max=105, avg=16.04, stdev= 6.12 00:38:54.702 clat (usec): min=1264, max=7040, avg=4298.00, stdev=235.53 00:38:54.702 lat (usec): min=1271, max=7053, avg=4314.04, stdev=236.73 00:38:54.702 clat percentiles (usec): 00:38:54.702 | 1.00th=[ 4015], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4178], 00:38:54.702 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:38:54.702 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:38:54.702 | 99.00th=[ 4948], 99.50th=[ 5276], 99.90th=[ 6718], 99.95th=[ 6783], 00:38:54.702 | 99.99th=[ 7046] 00:38:54.702 bw ( KiB/s): min=13723, max=14976, per=24.94%, avg=14580.78, stdev=391.40, samples=9 00:38:54.702 iops : min= 1715, max= 1872, avg=1822.56, stdev=49.03, samples=9 00:38:54.702 lat (msec) : 2=0.09%, 4=0.56%, 10=99.35% 00:38:54.702 cpu : usr=95.00%, sys=3.70%, ctx=9, majf=0, minf=1636 00:38:54.702 IO depths : 1=11.6%, 2=25.0%, 4=50.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 issued rwts: total=9144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:54.702 filename1: (groupid=0, jobs=1): err= 0: pid=120151: Thu Dec 5 10:10:41 2024 00:38:54.702 read: IOPS=1826, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5002msec) 00:38:54.702 slat (nsec): min=3829, max=73580, avg=10958.88, stdev=6532.27 00:38:54.702 clat (usec): min=2998, max=6354, avg=4318.84, stdev=192.48 00:38:54.702 lat (usec): min=3013, max=6375, avg=4329.80, stdev=192.98 00:38:54.702 clat percentiles (usec): 00:38:54.702 | 1.00th=[ 4047], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4228], 00:38:54.702 | 30.00th=[ 4228], 40.00th=[ 4228], 
50.00th=[ 4293], 60.00th=[ 4293], 00:38:54.702 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:38:54.702 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5669], 99.95th=[ 6325], 00:38:54.702 | 99.99th=[ 6325] 00:38:54.702 bw ( KiB/s): min=13824, max=14976, per=24.96%, avg=14592.00, stdev=350.54, samples=9 00:38:54.702 iops : min= 1728, max= 1872, avg=1824.00, stdev=43.82, samples=9 00:38:54.702 lat (msec) : 4=0.24%, 10=99.76% 00:38:54.702 cpu : usr=94.72%, sys=3.90%, ctx=8, majf=0, minf=1636 00:38:54.702 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 issued rwts: total=9136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:54.702 filename1: (groupid=0, jobs=1): err= 0: pid=120152: Thu Dec 5 10:10:41 2024 00:38:54.702 read: IOPS=1826, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5001msec) 00:38:54.702 slat (nsec): min=6296, max=86548, avg=17523.12, stdev=6694.56 00:38:54.702 clat (usec): min=2259, max=7743, avg=4296.39, stdev=210.45 00:38:54.702 lat (usec): min=2273, max=7767, avg=4313.91, stdev=211.83 00:38:54.702 clat percentiles (usec): 00:38:54.702 | 1.00th=[ 4047], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4178], 00:38:54.702 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:38:54.702 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:38:54.702 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 7701], 00:38:54.702 | 99.99th=[ 7767] 00:38:54.702 bw ( KiB/s): min=13696, max=14976, per=24.94%, avg=14577.78, stdev=396.82, samples=9 00:38:54.702 iops : min= 1712, max= 1872, avg=1822.22, stdev=49.60, samples=9 00:38:54.702 lat (msec) : 4=0.42%, 10=99.58% 00:38:54.702 cpu : usr=93.88%, sys=4.22%, ctx=12, majf=0, minf=1634 00:38:54.702 IO depths : 1=9.4%, 2=25.0%, 4=50.0%, 8=15.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.702 issued rwts: total=9136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:54.702 00:38:54.702 Run status group 0 (all jobs): 00:38:54.702 READ: bw=57.1MiB/s (59.9MB/s), 14.3MiB/s-14.3MiB/s (15.0MB/s-15.0MB/s), io=286MiB (299MB), run=5001-5002msec 00:38:55.297 ----------------------------------------------------- 00:38:55.297 Suppressions used: 00:38:55.297 count bytes template 00:38:55.297 6 52 /usr/src/fio/parse.c 00:38:55.297 1 8 libtcmalloc_minimal.so 00:38:55.297 1 904 libcrypto.so 00:38:55.297 ----------------------------------------------------- 00:38:55.297 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.297 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.297 ************************************ 00:38:55.297 END TEST fio_dif_rand_params 00:38:55.297 ************************************ 00:38:55.298 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.298 00:38:55.298 real 0m27.570s 00:38:55.298 user 2m11.880s 00:38:55.298 sys 0m4.380s 00:38:55.298 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 10:10:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:55.298 10:10:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:55.298 10:10:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 ************************************ 00:38:55.298 START TEST fio_dif_digest 00:38:55.298 ************************************ 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:55.298 10:10:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 bdev_null0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:55.298 [2024-12-05 10:10:42.855832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:55.298 { 00:38:55.298 "params": { 00:38:55.298 "name": "Nvme$subsystem", 00:38:55.298 "trtype": "$TEST_TRANSPORT", 00:38:55.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:55.298 "adrfam": "ipv4", 00:38:55.298 "trsvcid": "$NVMF_PORT", 00:38:55.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:55.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:55.298 "hdgst": ${hdgst:-false}, 00:38:55.298 "ddgst": ${ddgst:-false} 00:38:55.298 }, 00:38:55.298 "method": "bdev_nvme_attach_controller" 00:38:55.298 } 00:38:55.298 EOF 00:38:55.298 )") 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
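Note: the ldd/grep/awk trio traced above is how the harness decides what to LD_PRELOAD; condensed into a standalone sketch (paraphrased from the xtrace, not a verbatim excerpt of autotest_common.sh):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        # the third ldd column is the resolved runtime path,
        # e.g. /usr/lib64/libasan.so.8 as seen in this run
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # fio itself is not built with ASan, so the runtime must be preloaded
    # ahead of the instrumented plugin for its interceptors to load first
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61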
00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:55.298 "params": { 00:38:55.298 "name": "Nvme0", 00:38:55.298 "trtype": "tcp", 00:38:55.298 "traddr": "10.0.0.3", 00:38:55.298 "adrfam": "ipv4", 00:38:55.298 "trsvcid": "4420", 00:38:55.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:55.298 "hdgst": true, 00:38:55.298 "ddgst": true 00:38:55.298 }, 00:38:55.298 "method": "bdev_nvme_attach_controller" 00:38:55.298 }' 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:55.298 10:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:55.557 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:55.557 ... 00:38:55.557 fio-3.35 00:38:55.557 Starting 3 threads 00:39:07.764 00:39:07.764 filename0: (groupid=0, jobs=1): err= 0: pid=120263: Thu Dec 5 10:10:54 2024 00:39:07.764 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(296MiB/10046msec) 00:39:07.764 slat (usec): min=6, max=102, avg=19.47, stdev= 7.18 00:39:07.764 clat (usec): min=9476, max=54050, avg=12705.97, stdev=4048.85 00:39:07.764 lat (usec): min=9491, max=54099, avg=12725.44, stdev=4048.90 00:39:07.764 clat percentiles (usec): 00:39:07.764 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:39:07.764 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:39:07.764 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13435], 95.00th=[13960], 00:39:07.764 | 99.00th=[17433], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:39:07.764 | 99.99th=[54264] 00:39:07.764 bw ( KiB/s): min=26112, max=31744, per=38.30%, avg=30233.60, stdev=1526.77, samples=20 00:39:07.764 iops : min= 204, max= 248, avg=236.20, stdev=11.93, samples=20 00:39:07.764 lat (msec) : 10=0.59%, 20=98.43%, 50=0.08%, 100=0.89% 00:39:07.764 cpu : usr=93.20%, sys=5.02%, ctx=28, majf=0, minf=1636 00:39:07.764 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:07.764 filename0: (groupid=0, jobs=1): err= 0: pid=120264: Thu Dec 5 10:10:54 2024 00:39:07.764 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(231MiB/10004msec) 00:39:07.764 slat (nsec): min=6612, max=70234, avg=22353.83, stdev=5810.43 00:39:07.764 clat (usec): min=3135, max=21004, avg=16193.98, stdev=1516.61 00:39:07.764 lat (usec): min=3157, max=21028, avg=16216.33, stdev=1516.85 00:39:07.764 clat percentiles (usec): 00:39:07.764 | 1.00th=[ 9896], 5.00th=[13960], 10.00th=[15270], 20.00th=[15664], 00:39:07.764 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 
00:39:07.764 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:39:07.764 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20055], 99.95th=[21103], 00:39:07.764 | 99.99th=[21103] 00:39:07.764 bw ( KiB/s): min=23040, max=25600, per=29.92%, avg=23619.37, stdev=631.94, samples=19 00:39:07.764 iops : min= 180, max= 200, avg=184.53, stdev= 4.94, samples=19 00:39:07.764 lat (msec) : 4=0.05%, 10=1.03%, 20=98.76%, 50=0.16% 00:39:07.764 cpu : usr=94.42%, sys=4.13%, ctx=23, majf=0, minf=1634 00:39:07.764 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:07.764 filename0: (groupid=0, jobs=1): err= 0: pid=120265: Thu Dec 5 10:10:54 2024 00:39:07.764 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10042msec) 00:39:07.764 slat (usec): min=7, max=513, avg=23.62, stdev=13.34 00:39:07.764 clat (usec): min=7701, max=55608, avg=15158.72, stdev=2012.26 00:39:07.764 lat (usec): min=7723, max=55618, avg=15182.33, stdev=2011.81 00:39:07.764 clat percentiles (usec): 00:39:07.764 | 1.00th=[ 8848], 5.00th=[12387], 10.00th=[13698], 20.00th=[14353], 00:39:07.764 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:39:07.764 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16581], 95.00th=[16909], 00:39:07.764 | 99.00th=[17695], 99.50th=[18220], 99.90th=[50070], 99.95th=[55837], 00:39:07.764 | 99.99th=[55837] 00:39:07.764 bw ( KiB/s): min=24320, max=27904, per=32.09%, avg=25333.70, stdev=828.51, samples=20 00:39:07.764 iops : min= 190, max= 218, avg=197.90, stdev= 6.47, samples=20 00:39:07.764 lat (msec) : 10=3.63%, 20=96.16%, 50=0.15%, 100=0.05% 00:39:07.764 cpu : usr=91.33%, sys=6.05%, ctx=177, majf=0, minf=1636 00:39:07.764 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.764 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:07.764 00:39:07.764 Run status group 0 (all jobs): 00:39:07.764 READ: bw=77.1MiB/s (80.8MB/s), 23.1MiB/s-29.4MiB/s (24.2MB/s-30.8MB/s), io=774MiB (812MB), run=10004-10046msec 00:39:07.764 ----------------------------------------------------- 00:39:07.764 Suppressions used: 00:39:07.764 count bytes template 00:39:07.764 5 44 /usr/src/fio/parse.c 00:39:07.764 1 8 libtcmalloc_minimal.so 00:39:07.764 1 904 libcrypto.so 00:39:07.764 ----------------------------------------------------- 00:39:07.764 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:07.764 ************************************ 00:39:07.764 END TEST fio_dif_digest 00:39:07.764 ************************************ 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.764 00:39:07.764 real 0m12.502s 00:39:07.764 user 0m30.003s 00:39:07.764 sys 0m1.952s 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.764 10:10:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:07.764 10:10:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:07.764 10:10:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:07.764 10:10:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:07.764 10:10:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:08.021 rmmod nvme_tcp 00:39:08.021 rmmod nvme_fabrics 00:39:08.021 rmmod nvme_keyring 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 119503 ']' 00:39:08.021 10:10:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 119503 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 119503 ']' 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 119503 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119503 00:39:08.022 killing process with pid 119503 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119503' 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 119503 00:39:08.022 10:10:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 119503 00:39:08.957 10:10:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:08.957 10:10:56 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:09.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:09.216 Waiting for block devices as requested 00:39:09.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:09.476 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:09.476 10:10:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:09.476 10:10:57 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.736 10:10:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:09.736 10:10:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.736 10:10:57 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:39:09.736 00:39:09.736 real 1m9.231s 00:39:09.736 user 4m10.526s 00:39:09.736 sys 0m13.913s 00:39:09.736 ************************************ 00:39:09.736 END TEST nvmf_dif 00:39:09.736 ************************************ 00:39:09.736 10:10:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.736 10:10:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:09.736 10:10:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.736 10:10:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:09.736 10:10:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.736 10:10:57 -- common/autotest_common.sh@10 -- # set +x 00:39:09.736 ************************************ 00:39:09.736 START TEST nvmf_abort_qd_sizes 00:39:09.736 ************************************ 00:39:09.736 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:09.736 * Looking for test storage... 
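Note: the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence traced above is the whole firewall cleanup. Every rule the harness installs carries an "-m comment" tag (see the ipts calls later in this log), so teardown filters them out wholesale — a sketch reconstructed from the trace:

    # drop only the rules tagged SPDK_NVMF at setup time; everything else
    # in the ruleset survives the round-trip untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore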
00:39:09.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:09.995 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:09.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.996 --rc genhtml_branch_coverage=1 00:39:09.996 --rc genhtml_function_coverage=1 00:39:09.996 --rc genhtml_legend=1 00:39:09.996 --rc geninfo_all_blocks=1 00:39:09.996 --rc geninfo_unexecuted_blocks=1 00:39:09.996 00:39:09.996 ' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:09.996 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.996 --rc genhtml_branch_coverage=1 00:39:09.996 --rc genhtml_function_coverage=1 00:39:09.996 --rc genhtml_legend=1 00:39:09.996 --rc geninfo_all_blocks=1 00:39:09.996 --rc geninfo_unexecuted_blocks=1 00:39:09.996 00:39:09.996 ' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:09.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.996 --rc genhtml_branch_coverage=1 00:39:09.996 --rc genhtml_function_coverage=1 00:39:09.996 --rc genhtml_legend=1 00:39:09.996 --rc geninfo_all_blocks=1 00:39:09.996 --rc geninfo_unexecuted_blocks=1 00:39:09.996 00:39:09.996 ' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:09.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:09.996 --rc genhtml_branch_coverage=1 00:39:09.996 --rc genhtml_function_coverage=1 00:39:09.996 --rc genhtml_legend=1 00:39:09.996 --rc geninfo_all_blocks=1 00:39:09.996 --rc geninfo_unexecuted_blocks=1 00:39:09.996 00:39:09.996 ' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:09.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:09.996 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:09.997 Cannot find device "nvmf_init_br" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:09.997 Cannot find device "nvmf_init_br2" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:09.997 Cannot find device "nvmf_tgt_br" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:09.997 Cannot find device "nvmf_tgt_br2" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:09.997 Cannot find device "nvmf_init_br" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:39:09.997 Cannot find device "nvmf_init_br2" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:09.997 Cannot find device "nvmf_tgt_br" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:09.997 Cannot find device "nvmf_tgt_br2" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:09.997 Cannot find device "nvmf_br" 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:39:09.997 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:10.256 Cannot find device "nvmf_init_if" 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:10.256 Cannot find device "nvmf_init_if2" 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:10.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:10.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
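Note: the link and address commands above, together with the bridge and iptables steps that continue below, build a fixed two-namespace topology for the NVMe/TCP tests. A condensed sketch, with the second interface pair (nvmf_init_if2/nvmf_tgt_if2) omitted for brevity:

    #  host netns                        netns nvmf_tgt_ns_spdk
    #  nvmf_init_if 10.0.0.1/24 --veth peer-- nvmf_init_br
    #  nvmf_tgt_br --veth peer-- nvmf_tgt_if 10.0.0.3/24 (moved into netns)
    #  both *_br ends enslaved to bridge nvmf_br
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

The pings to 10.0.0.3/10.0.0.4 (host to namespace) and 10.0.0.1/10.0.0.2 (namespace to host) that follow verify the bridge path in both directions before any NVMe traffic is attempted.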
00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:10.256 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:10.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:10.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:39:10.257 00:39:10.257 --- 10.0.0.3 ping statistics --- 00:39:10.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.257 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:10.257 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:10.257 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:39:10.257 00:39:10.257 --- 10.0.0.4 ping statistics --- 00:39:10.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.257 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:10.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:39:10.257 00:39:10.257 --- 10.0.0.1 ping statistics --- 00:39:10.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.257 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:10.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:39:10.257 00:39:10.257 --- 10.0.0.2 ping statistics --- 00:39:10.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.257 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:10.257 10:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:11.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:11.193 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:11.193 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=120919 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 120919 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 120919 ']' 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.193 10:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.452 [2024-12-05 10:10:58.873497] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
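Note: nvmfappstart forks the target inside the namespace and waitforlisten polls its RPC socket before the test proceeds. A minimal sketch of the pattern — the rpc_get_methods probe mirrors what waitforlisten in autotest_common.sh does, but the loop below is a simplification, not the helper's exact body:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # block until the app answers on its default RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

The DPDK EAL banner that follows is the target coming up under that invocation (-m 0xf pinning reactors to cores 0-3, -e 0xFFFF enabling all tracepoint groups).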
00:39:11.452 [2024-12-05 10:10:58.873959] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.452 [2024-12-05 10:10:59.063012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.711 [2024-12-05 10:10:59.171105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.711 [2024-12-05 10:10:59.171163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.711 [2024-12-05 10:10:59.171181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.711 [2024-12-05 10:10:59.171192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.711 [2024-12-05 10:10:59.171203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.711 [2024-12-05 10:10:59.173219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.711 [2024-12-05 10:10:59.173377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.711 [2024-12-05 10:10:59.174115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.711 [2024-12-05 10:10:59.174133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:12.278 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.279 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:12.279 10:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:12.279 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.279 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:39:12.538 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:39:12.539 10:10:59 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:12.539 10:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 ************************************ 00:39:12.539 START TEST spdk_target_abort 00:39:12.539 ************************************ 00:39:12.539 10:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:12.539 10:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:12.539 10:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:39:12.539 10:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.539 10:10:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 spdk_targetn1 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 [2024-12-05 10:11:00.063810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.539 [2024-12-05 10:11:00.116890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.539 10:11:00 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.539 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.540 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:12.540 10:11:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:16.726 Initializing NVMe Controllers 00:39:16.726 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:39:16.726 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:16.726 Initialization complete. Launching workers. 
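rabort expands the trtype/adrfam/traddr/trsvcid/subnqn fields one at a time into the -r transport ID string, then sweeps the queue depths. Condensed, the loop the trace is executing is (values exactly as traced; -q sets the I/O queue depth, -w rw -M 50 a 50/50 read/write mix, -o 4096 a 4 KiB I/O size):

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each run prints the per-namespace I/O total and the abort statistics in the lines that follow.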
00:39:16.726 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8791, failed: 0 00:39:16.726 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1172, failed to submit 7619 00:39:16.726 success 792, unsuccessful 380, failed 0 00:39:16.726 10:11:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:16.726 10:11:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.027 Initializing NVMe Controllers 00:39:20.027 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.027 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.027 Initialization complete. Launching workers. 00:39:20.027 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6007, failed: 0 00:39:20.027 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 4753 00:39:20.027 success 229, unsuccessful 1025, failed 0 00:39:20.027 10:11:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.027 10:11:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.316 Initializing NVMe Controllers 00:39:23.316 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.316 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.316 Initialization complete. Launching workers. 
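Reading the counters from the two runs above (and the qd-64 results below): "I/O completed" is the namespace I/O total, "abort submitted / failed to submit" counts abort commands that made it onto the admin queue versus ones that could not be issued, and "success / unsuccessful" splits completed aborts into I/Os actually aborted versus ones the target had already finished. A throwaway one-liner along those lines, illustrative only (run.log is a hypothetical capture of the tool's raw output, not the timestamped trace):

    awk '/success [0-9]+, unsuccessful/ \
        { printf "%.1f%% of submitted aborts succeeded\n", 100 * $2 / ($2 + $4) }' run.log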
00:39:23.316 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28200, failed: 0 00:39:23.316 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2666, failed to submit 25534 00:39:23.316 success 353, unsuccessful 2313, failed 0 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 120919 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 120919 ']' 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 120919 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120919 00:39:23.316 killing process with pid 120919 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120919' 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 120919 00:39:23.316 10:11:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 120919 00:39:24.254 ************************************ 00:39:24.254 END TEST spdk_target_abort 00:39:24.254 ************************************ 00:39:24.254 00:39:24.254 real 0m11.596s 00:39:24.254 user 0m46.825s 00:39:24.254 sys 0m1.839s 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.254 10:11:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:24.254 10:11:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.254 10:11:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.254 10:11:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.254 ************************************ 00:39:24.254 START TEST kernel_target_abort 00:39:24.254 
************************************ 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:24.254 10:11:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:24.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:24.514 Waiting for block devices as requested 00:39:24.514 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:24.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:25.032 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:39:25.033 No valid GPT data, bailing 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:39:25.033 No valid GPT data, bailing 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:39:25.033 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:39:25.293 No valid GPT data, bailing 00:39:25.293 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:39:25.293 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:25.293 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:39:25.294 No valid GPT data, bailing 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 --hostid=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 -a 10.0.0.1 -t tcp -s 4420 00:39:25.294 00:39:25.294 Discovery Log Number of Records 2, Generation counter 2 00:39:25.294 =====Discovery Log Entry 0====== 00:39:25.294 trtype: tcp 00:39:25.294 adrfam: ipv4 00:39:25.294 subtype: current discovery subsystem 00:39:25.294 treq: not specified, sq flow control disable supported 00:39:25.294 portid: 1 00:39:25.294 trsvcid: 4420 00:39:25.294 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:25.294 traddr: 10.0.0.1 00:39:25.294 eflags: none 00:39:25.294 sectype: none 00:39:25.294 =====Discovery Log Entry 1====== 00:39:25.294 trtype: tcp 00:39:25.294 adrfam: ipv4 00:39:25.294 subtype: nvme subsystem 00:39:25.294 treq: not specified, sq flow control disable supported 00:39:25.294 portid: 1 00:39:25.294 trsvcid: 4420 00:39:25.294 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:25.294 traddr: 10.0.0.1 00:39:25.294 eflags: none 00:39:25.294 sectype: none 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:25.294 10:11:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:25.294 10:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.581 Initializing NVMe Controllers 00:39:28.581 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:28.581 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:28.581 Initialization complete. Launching workers. 00:39:28.581 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24556, failed: 0 00:39:28.581 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24556, failed to submit 0 00:39:28.581 success 0, unsuccessful 24556, failed 0 00:39:28.581 10:11:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:28.581 10:11:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:31.866 Initializing NVMe Controllers 00:39:31.866 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:31.866 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:31.866 Initialization complete. Launching workers. 
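The configfs wiring earlier in the trace writes its values with bare echoes whose redirection targets the xtrace output drops. Reconstructed against the standard nvmet configfs layout (the attribute names are the kernel's and are not shown in the log, so treat this as a hedged recap rather than a verbatim replay):

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$cfs/ports/1/addr_traddr"
    echo tcp           > "$cfs/ports/1/addr_trtype"
    echo 4420          > "$cfs/ports/1/addr_trsvcid"
    echo ipv4          > "$cfs/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfs/ports/1/subsystems/"

The nvme discover output above confirms the port came up before rabort reran the same queue-depth sweep against 10.0.0.1; note that against the kernel target every abort in this excerpt lands in the "unsuccessful" bucket (success 0 in all three runs).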
00:39:31.866 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66334, failed: 0 00:39:31.866 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29129, failed to submit 37205 00:39:31.866 success 0, unsuccessful 29129, failed 0 00:39:31.866 10:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:31.866 10:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:35.189 Initializing NVMe Controllers 00:39:35.189 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.189 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.189 Initialization complete. Launching workers. 00:39:35.189 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81025, failed: 0 00:39:35.189 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20240, failed to submit 60785 00:39:35.189 success 0, unsuccessful 20240, failed 0 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:35.189 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:35.190 10:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:36.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:37.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:37.063 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:39:37.322 00:39:37.322 real 0m13.069s 00:39:37.322 user 0m6.540s 00:39:37.322 sys 0m4.165s 00:39:37.322 10:11:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.322 10:11:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:37.322 ************************************ 00:39:37.322 END TEST kernel_target_abort 00:39:37.322 ************************************ 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:37.322 
10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:37.322 rmmod nvme_tcp 00:39:37.322 rmmod nvme_fabrics 00:39:37.322 rmmod nvme_keyring 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 120919 ']' 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 120919 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 120919 ']' 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 120919 00:39:37.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (120919) - No such process 00:39:37.322 Process with pid 120919 is not found 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 120919 is not found' 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:37.322 10:11:24 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:37.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:37.584 Waiting for block devices as requested 00:39:37.844 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:37.844 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:37.844 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:37.844 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:37.845 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:38.103 10:11:25 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:39:38.103 00:39:38.103 real 0m28.422s 00:39:38.103 user 0m54.727s 00:39:38.103 sys 0m7.564s 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.103 10:11:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:38.103 ************************************ 00:39:38.103 END TEST nvmf_abort_qd_sizes 00:39:38.104 ************************************ 00:39:38.362 10:11:25 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:39:38.362 10:11:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:38.362 10:11:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.362 10:11:25 -- common/autotest_common.sh@10 -- # set +x 00:39:38.362 ************************************ 00:39:38.362 START TEST keyring_file 00:39:38.362 ************************************ 00:39:38.362 10:11:25 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:39:38.362 * Looking for test storage... 
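Before the keyring suite gets under way, note how nvmftestfini above unwound the firewall state: the rules were installed through the ipts wrapper with an SPDK_NVMF comment precisely so the iptr step could strip them wholesale without touching anything else on the host. The three commands traced at nvmf/common.sh@791 amount to:

    # Keep every rule except the ones this test tagged with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore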
00:39:38.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:39:38.362 10:11:25 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:38.362 10:11:25 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:38.362 10:11:25 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:39:38.362 10:11:25 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.362 10:11:25 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:38.363 10:11:25 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.363 10:11:25 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.363 --rc genhtml_branch_coverage=1 00:39:38.363 --rc genhtml_function_coverage=1 00:39:38.363 --rc genhtml_legend=1 00:39:38.363 --rc geninfo_all_blocks=1 00:39:38.363 --rc geninfo_unexecuted_blocks=1 00:39:38.363 00:39:38.363 ' 00:39:38.363 10:11:25 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.363 --rc genhtml_branch_coverage=1 00:39:38.363 --rc genhtml_function_coverage=1 00:39:38.363 --rc genhtml_legend=1 00:39:38.363 --rc geninfo_all_blocks=1 00:39:38.363 --rc 
geninfo_unexecuted_blocks=1 00:39:38.363 00:39:38.363 ' 00:39:38.363 10:11:25 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.363 --rc genhtml_branch_coverage=1 00:39:38.363 --rc genhtml_function_coverage=1 00:39:38.363 --rc genhtml_legend=1 00:39:38.363 --rc geninfo_all_blocks=1 00:39:38.363 --rc geninfo_unexecuted_blocks=1 00:39:38.363 00:39:38.363 ' 00:39:38.363 10:11:25 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:38.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.363 --rc genhtml_branch_coverage=1 00:39:38.363 --rc genhtml_function_coverage=1 00:39:38.363 --rc genhtml_legend=1 00:39:38.363 --rc geninfo_all_blocks=1 00:39:38.363 --rc geninfo_unexecuted_blocks=1 00:39:38.363 00:39:38.363 ' 00:39:38.363 10:11:25 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:39:38.363 10:11:25 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.363 10:11:25 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:38.363 10:11:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.622 10:11:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.622 10:11:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.622 10:11:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.622 10:11:25 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.622 10:11:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.622 10:11:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.622 10:11:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:38.622 10:11:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:38.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.622 10:11:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.622 10:11:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:38.622 10:11:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:38.622 10:11:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:38.622 10:11:25 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:38.622 10:11:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:38.622 10:11:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7pJaG8fYWk 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7pJaG8fYWk 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7pJaG8fYWk 00:39:38.622 10:11:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7pJaG8fYWk 00:39:38.622 10:11:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.55nsnRY1Dg 00:39:38.622 10:11:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:38.622 10:11:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:38.623 10:11:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:38.623 10:11:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:38.623 10:11:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:38.623 10:11:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:38.623 10:11:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.55nsnRY1Dg 00:39:38.623 10:11:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.55nsnRY1Dg 00:39:38.623 10:11:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.55nsnRY1Dg 00:39:38.623 10:11:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=121951 00:39:38.623 10:11:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:38.623 10:11:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 121951 00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 121951 ']' 00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
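prep_key above builds each key file with mktemp, an inline python call via format_interchange_psk, and chmod 0600 (restricting the file before it is handed to the keyring). The interchange format produced is "NVMeTLSkey-1:<hh>:<base64 payload>:", where hh encodes the digest (digest 0 traces as "00") and the payload is the key bytes with a CRC32 appended. A hedged re-creation of what the trace pipes into python - at nvmf/common.sh@733 (the ASCII key handling and CRC byte order are my reading of the format; verify against nvmf/common.sh before relying on it):

    import base64, zlib
    key = b"00112233445566778899aabbccddeeff"      # key0 from file.sh@15, used verbatim as bytes
    crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: CRC32 appended little-endian
    print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")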
00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.623 10:11:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:38.881 [2024-12-05 10:11:26.265397] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:39:38.881 [2024-12-05 10:11:26.265591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121951 ] 00:39:38.881 [2024-12-05 10:11:26.461147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.140 [2024-12-05 10:11:26.608021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:40.074 10:11:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:40.074 [2024-12-05 10:11:27.352701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.074 null0 00:39:40.074 [2024-12-05 10:11:27.384713] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:40.074 [2024-12-05 10:11:27.385004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.074 10:11:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:40.074 [2024-12-05 10:11:27.416679] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:40.074 2024/12/05 10:11:27 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:39:40.074 request: 00:39:40.074 { 00:39:40.074 "method": "nvmf_subsystem_add_listener", 00:39:40.074 "params": { 
00:39:40.074 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:40.074 "secure_channel": false, 00:39:40.074 "listen_address": { 00:39:40.074 "trtype": "tcp", 00:39:40.074 "traddr": "127.0.0.1", 00:39:40.074 "trsvcid": "4420" 00:39:40.074 } 00:39:40.074 } 00:39:40.074 } 00:39:40.074 Got JSON-RPC error response 00:39:40.074 GoRPCClient: error on JSON-RPC call 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:40.074 10:11:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=121982 00:39:40.074 10:11:27 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:40.074 10:11:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 121982 /var/tmp/bperf.sock 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 121982 ']' 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.074 10:11:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:40.074 [2024-12-05 10:11:27.544845] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:39:40.074 [2024-12-05 10:11:27.545042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121982 ] 00:39:40.332 [2024-12-05 10:11:27.735096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.332 [2024-12-05 10:11:27.880535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.898 10:11:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.898 10:11:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:40.898 10:11:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:40.898 10:11:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:41.156 10:11:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.55nsnRY1Dg 00:39:41.156 10:11:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.55nsnRY1Dg 00:39:41.415 10:11:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:41.415 10:11:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:41.415 10:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.415 10:11:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.415 10:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.673 10:11:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7pJaG8fYWk == \/\t\m\p\/\t\m\p\.\7\p\J\a\G\8\f\Y\W\k ]] 00:39:41.673 10:11:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:41.673 10:11:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:41.673 10:11:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.673 10:11:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:41.673 10:11:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.932 10:11:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.55nsnRY1Dg == \/\t\m\p\/\t\m\p\.\5\5\n\s\n\R\Y\1\D\g ]] 00:39:41.932 10:11:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:41.932 10:11:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:41.932 10:11:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.932 10:11:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.932 10:11:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.932 10:11:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.190 10:11:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:42.190 10:11:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:42.190 10:11:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.190 10:11:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:42.190 10:11:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.190 10:11:29 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.190 10:11:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:42.450 10:11:30 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:42.450 10:11:30 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:42.450 10:11:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:42.708 [2024-12-05 10:11:30.212582] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:42.708 nvme0n1 00:39:42.708 10:11:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:42.708 10:11:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:42.708 10:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.708 10:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.708 10:11:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.708 10:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.966 10:11:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:42.966 10:11:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:42.966 10:11:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:42.966 10:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.966 10:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.966 10:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:42.966 10:11:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:43.224 10:11:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:43.224 10:11:30 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:43.482 Running I/O for 1 seconds... 
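Note: the refcount checks just before this run are all instances of one helper pattern from keyring/common.sh. Restated as a self-contained sketch (rpc.py path taken from the log; the wrapper name is illustrative):

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  get_refcnt() {
      # print the refcnt field of the named key from the keyring listing
      bperf_rpc keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  # key0 is referenced once by the keyring itself and once by nvme0n1's TLS
  # session, so (( $(get_refcnt key0) == 2 )) holds while the controller is
  # attached; key1 was added but never handed to a controller, so it stays at 1.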
00:39:44.417 8442.00 IOPS, 32.98 MiB/s
00:39:44.417 Latency(us)
00:39:44.417 [2024-12-05T10:11:32.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:44.417 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:39:44.417 nvme0n1 : 1.02 8443.60 32.98 0.00 0.00 15040.44 11617.75 31457.28
00:39:44.417 [2024-12-05T10:11:32.046Z] ===================================================================================================================
00:39:44.417 [2024-12-05T10:11:32.046Z] Total : 8443.60 32.98 0.00 0.00 15040.44 11617.75 31457.28
00:39:44.417 {
00:39:44.417 "results": [
00:39:44.417 {
00:39:44.417 "job": "nvme0n1",
00:39:44.417 "core_mask": "0x2",
00:39:44.417 "workload": "randrw",
00:39:44.417 "percentage": 50,
00:39:44.417 "status": "finished",
00:39:44.417 "queue_depth": 128,
00:39:44.417 "io_size": 4096,
00:39:44.417 "runtime": 1.015088,
00:39:44.417 "iops": 8443.602919155777,
00:39:44.417 "mibps": 32.982823902952255,
00:39:44.417 "io_failed": 0,
00:39:44.417 "io_timeout": 0,
00:39:44.417 "avg_latency_us": 15040.435885066981,
00:39:44.417 "min_latency_us": 11617.745454545455,
00:39:44.417 "max_latency_us": 31457.28
00:39:44.417 }
00:39:44.417 ],
00:39:44.417 "core_count": 1
00:39:44.417 }
00:39:44.417 10:11:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:39:44.417 10:11:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:39:44.676 10:11:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:39:44.676 10:11:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:44.676 10:11:32 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:44.676 10:11:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:44.676 10:11:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:44.676 10:11:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:44.935 10:11:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:39:44.935 10:11:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:39:44.935 10:11:32 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:44.935 10:11:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:44.935 10:11:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:44.935 10:11:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:44.935 10:11:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:45.194 10:11:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:39:45.194 10:11:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@644
-- # case "$(type -t "$arg")" in 00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:45.194 10:11:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.194 10:11:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.453 [2024-12-05 10:11:32.962155] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:45.453 [2024-12-05 10:11:32.962347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:39:45.453 [2024-12-05 10:11:32.963316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:39:45.453 [2024-12-05 10:11:32.964309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:45.453 [2024-12-05 10:11:32.964342] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:45.453 [2024-12-05 10:11:32.964359] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:45.453 [2024-12-05 10:11:32.964380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
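Note: the failure above, and the RPC error dump that follows, are deliberate. The attached controller's TLS session was established with key0, so a second attach presenting key1 cannot complete the handshake; the socket drops (errno 107), and the RPC surfaces Code=-5. The NOT wrapper from common/autotest_common.sh inverts the exit status so the expected failure counts as a pass. A simplified equivalent of what the @652-@655 trace lines implement (the real helper also validates its argument and propagates the error status):

  NOT() {
      # succeed only if the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT false && echo "expected failure was observed"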
00:39:45.453 2024/12/05 10:11:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:39:45.453 request: 00:39:45.453 { 00:39:45.453 "method": "bdev_nvme_attach_controller", 00:39:45.453 "params": { 00:39:45.453 "name": "nvme0", 00:39:45.453 "trtype": "tcp", 00:39:45.453 "traddr": "127.0.0.1", 00:39:45.453 "adrfam": "ipv4", 00:39:45.453 "trsvcid": "4420", 00:39:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:45.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:45.453 "prchk_reftag": false, 00:39:45.453 "prchk_guard": false, 00:39:45.453 "hdgst": false, 00:39:45.453 "ddgst": false, 00:39:45.453 "psk": "key1", 00:39:45.453 "allow_unrecognized_csi": false 00:39:45.453 } 00:39:45.453 } 00:39:45.453 Got JSON-RPC error response 00:39:45.453 GoRPCClient: error on JSON-RPC call 00:39:45.453 10:11:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:45.453 10:11:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:45.453 10:11:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:45.453 10:11:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:45.453 10:11:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:45.453 10:11:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:45.453 10:11:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.453 10:11:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.453 10:11:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:45.453 10:11:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.712 10:11:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:45.712 10:11:33 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:45.712 10:11:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:45.712 10:11:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.712 10:11:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.712 10:11:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:45.712 10:11:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.971 10:11:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:45.971 10:11:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:45.971 10:11:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:46.230 10:11:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:46.230 10:11:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:46.490 10:11:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:46.490 10:11:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:46.490 10:11:34 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.750 10:11:34 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:46.750 10:11:34 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.7pJaG8fYWk 00:39:46.750 10:11:34 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.750 10:11:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:46.750 10:11:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:47.009 [2024-12-05 10:11:34.569164] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7pJaG8fYWk': 0100660 00:39:47.009 [2024-12-05 10:11:34.569214] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:47.009 2024/12/05 10:11:34 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.7pJaG8fYWk], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:39:47.009 request: 00:39:47.009 { 00:39:47.009 "method": "keyring_file_add_key", 00:39:47.010 "params": { 00:39:47.010 "name": "key0", 00:39:47.010 "path": "/tmp/tmp.7pJaG8fYWk" 00:39:47.010 } 00:39:47.010 } 00:39:47.010 Got JSON-RPC error response 00:39:47.010 GoRPCClient: error on JSON-RPC call 00:39:47.010 10:11:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:47.010 10:11:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:47.010 10:11:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:47.010 10:11:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:47.010 10:11:34 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.7pJaG8fYWk 00:39:47.010 10:11:34 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:47.010 10:11:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk 00:39:47.268 10:11:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.7pJaG8fYWk 00:39:47.268 10:11:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:47.268 10:11:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:47.268 10:11:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:47.268 10:11:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.268 10:11:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.268 10:11:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:47.836 10:11:35 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:47.836 10:11:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.836 10:11:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.836 10:11:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.836 [2024-12-05 10:11:35.449387] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7pJaG8fYWk': No such file or directory 00:39:47.836 [2024-12-05 10:11:35.449437] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:47.836 [2024-12-05 10:11:35.449463] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:47.836 [2024-12-05 10:11:35.449478] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:47.837 [2024-12-05 10:11:35.449492] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:47.837 [2024-12-05 10:11:35.449513] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:47.837 2024/12/05 10:11:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:39:47.837 request: 00:39:47.837 { 00:39:47.837 "method": "bdev_nvme_attach_controller", 00:39:47.837 "params": { 00:39:47.837 "name": "nvme0", 00:39:47.837 "trtype": "tcp", 00:39:47.837 "traddr": "127.0.0.1", 00:39:47.837 "adrfam": "ipv4", 00:39:47.837 "trsvcid": "4420", 00:39:47.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:47.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:47.837 "prchk_reftag": false, 00:39:47.837 "prchk_guard": false, 00:39:47.837 "hdgst": false, 00:39:47.837 "ddgst": false, 00:39:47.837 "psk": "key0", 00:39:47.837 "allow_unrecognized_csi": false 00:39:47.837 } 00:39:47.837 } 00:39:47.837 Got JSON-RPC error response 00:39:47.837 
GoRPCClient: error on JSON-RPC call 00:39:48.094 10:11:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:48.094 10:11:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:48.094 10:11:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:48.094 10:11:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:48.094 10:11:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:48.094 10:11:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:48.352 10:11:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DgbcXT065P 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:48.352 10:11:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DgbcXT065P 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DgbcXT065P 00:39:48.352 10:11:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.DgbcXT065P 00:39:48.352 10:11:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DgbcXT065P 00:39:48.352 10:11:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DgbcXT065P 00:39:48.610 10:11:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.610 10:11:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.867 nvme0n1 00:39:48.867 10:11:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:48.867 10:11:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:48.867 10:11:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.867 10:11:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.867 10:11:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.867 10:11:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
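Note: condensing keyring/file.sh@81 through @98 above: two failure paths for file-backed keys, then recovery with a freshly generated key. In shorthand (bperf_rpc as sketched earlier; paths and arguments from the log):

  chmod 0660 /tmp/tmp.7pJaG8fYWk         # any mode other than 0600 => keyring_file_add_key fails
  NOT bperf_rpc keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk
  chmod 0600 /tmp/tmp.7pJaG8fYWk && bperf_rpc keyring_file_add_key key0 /tmp/tmp.7pJaG8fYWk
  rm -f /tmp/tmp.7pJaG8fYWk              # key object survives, backing file is gone
  NOT bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # recovery: a new key file (/tmp/tmp.DgbcXT065P) is prepped, registered as
  # key0, and the attach with --psk key0 then succeeds, creating nvme0n1.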
00:39:49.124 10:11:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:49.124 10:11:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:49.124 10:11:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:49.382 10:11:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:49.382 10:11:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:49.382 10:11:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.382 10:11:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.382 10:11:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.640 10:11:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:49.640 10:11:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:49.640 10:11:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.640 10:11:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.640 10:11:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.640 10:11:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.640 10:11:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.898 10:11:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:49.898 10:11:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:49.898 10:11:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:50.156 10:11:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:50.156 10:11:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:50.156 10:11:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.415 10:11:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:50.415 10:11:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DgbcXT065P 00:39:50.415 10:11:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DgbcXT065P 00:39:50.674 10:11:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.55nsnRY1Dg 00:39:50.674 10:11:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.55nsnRY1Dg 00:39:51.241 10:11:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:51.241 10:11:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:51.500 nvme0n1 00:39:51.500 10:11:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:51.500 10:11:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
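Note: the save_config dump that follows is the bridge to the second half of the test: the entire bperf state, keyring entries included, is captured as JSON, the process is killed, and a new bdevperf is started with that JSON fed to -c. A sketch of the shape of that restart, assuming bash process substitution is what produces the /dev/fd/63 seen at keyring/file.sh@116 below:

  config=$(bperf_rpc save_config)          # the JSON printed in the log below
  # ... old bperf killed here ...
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")   # expands to /dev/fd/63
  # afterwards the test confirms the replay restored everything:
  bperf_rpc keyring_get_keys | jq length                   # 2: key0 and key1 re-added
  bperf_rpc bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0, re-attached with --psk key0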
00:39:51.759 10:11:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:51.759 "subsystems": [ 00:39:51.759 { 00:39:51.759 "subsystem": "keyring", 00:39:51.759 "config": [ 00:39:51.759 { 00:39:51.759 "method": "keyring_file_add_key", 00:39:51.759 "params": { 00:39:51.759 "name": "key0", 00:39:51.759 "path": "/tmp/tmp.DgbcXT065P" 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "keyring_file_add_key", 00:39:51.759 "params": { 00:39:51.759 "name": "key1", 00:39:51.759 "path": "/tmp/tmp.55nsnRY1Dg" 00:39:51.759 } 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "iobuf", 00:39:51.759 "config": [ 00:39:51.759 { 00:39:51.759 "method": "iobuf_set_options", 00:39:51.759 "params": { 00:39:51.759 "enable_numa": false, 00:39:51.759 "large_bufsize": 135168, 00:39:51.759 "large_pool_count": 1024, 00:39:51.759 "small_bufsize": 8192, 00:39:51.759 "small_pool_count": 8192 00:39:51.759 } 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "sock", 00:39:51.759 "config": [ 00:39:51.759 { 00:39:51.759 "method": "sock_set_default_impl", 00:39:51.759 "params": { 00:39:51.759 "impl_name": "posix" 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "sock_impl_set_options", 00:39:51.759 "params": { 00:39:51.759 "enable_ktls": false, 00:39:51.759 "enable_placement_id": 0, 00:39:51.759 "enable_quickack": false, 00:39:51.759 "enable_recv_pipe": true, 00:39:51.759 "enable_zerocopy_send_client": false, 00:39:51.759 "enable_zerocopy_send_server": true, 00:39:51.759 "impl_name": "ssl", 00:39:51.759 "recv_buf_size": 4096, 00:39:51.759 "send_buf_size": 4096, 00:39:51.759 "tls_version": 0, 00:39:51.759 "zerocopy_threshold": 0 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "sock_impl_set_options", 00:39:51.759 "params": { 00:39:51.759 "enable_ktls": false, 00:39:51.759 "enable_placement_id": 0, 00:39:51.759 "enable_quickack": false, 00:39:51.759 "enable_recv_pipe": true, 00:39:51.759 "enable_zerocopy_send_client": false, 00:39:51.759 "enable_zerocopy_send_server": true, 00:39:51.759 "impl_name": "posix", 00:39:51.759 "recv_buf_size": 2097152, 00:39:51.759 "send_buf_size": 2097152, 00:39:51.759 "tls_version": 0, 00:39:51.759 "zerocopy_threshold": 0 00:39:51.759 } 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "vmd", 00:39:51.759 "config": [] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "accel", 00:39:51.759 "config": [ 00:39:51.759 { 00:39:51.759 "method": "accel_set_options", 00:39:51.759 "params": { 00:39:51.759 "buf_count": 2048, 00:39:51.759 "large_cache_size": 16, 00:39:51.759 "sequence_count": 2048, 00:39:51.759 "small_cache_size": 128, 00:39:51.759 "task_count": 2048 00:39:51.759 } 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "bdev", 00:39:51.759 "config": [ 00:39:51.759 { 00:39:51.759 "method": "bdev_set_options", 00:39:51.759 "params": { 00:39:51.759 "bdev_auto_examine": true, 00:39:51.759 "bdev_io_cache_size": 256, 00:39:51.759 "bdev_io_pool_size": 65535, 00:39:51.759 "iobuf_large_cache_size": 16, 00:39:51.759 "iobuf_small_cache_size": 128 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_raid_set_options", 00:39:51.759 "params": { 00:39:51.759 "process_max_bandwidth_mb_sec": 0, 00:39:51.759 "process_window_size_kb": 1024 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_iscsi_set_options", 00:39:51.759 "params": { 00:39:51.759 
"timeout_sec": 30 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_nvme_set_options", 00:39:51.759 "params": { 00:39:51.759 "action_on_timeout": "none", 00:39:51.759 "allow_accel_sequence": false, 00:39:51.759 "arbitration_burst": 0, 00:39:51.759 "bdev_retry_count": 3, 00:39:51.759 "ctrlr_loss_timeout_sec": 0, 00:39:51.759 "delay_cmd_submit": true, 00:39:51.759 "dhchap_dhgroups": [ 00:39:51.759 "null", 00:39:51.759 "ffdhe2048", 00:39:51.759 "ffdhe3072", 00:39:51.759 "ffdhe4096", 00:39:51.759 "ffdhe6144", 00:39:51.759 "ffdhe8192" 00:39:51.759 ], 00:39:51.759 "dhchap_digests": [ 00:39:51.759 "sha256", 00:39:51.759 "sha384", 00:39:51.759 "sha512" 00:39:51.759 ], 00:39:51.759 "disable_auto_failback": false, 00:39:51.759 "fast_io_fail_timeout_sec": 0, 00:39:51.759 "generate_uuids": false, 00:39:51.759 "high_priority_weight": 0, 00:39:51.759 "io_path_stat": false, 00:39:51.759 "io_queue_requests": 512, 00:39:51.759 "keep_alive_timeout_ms": 10000, 00:39:51.759 "low_priority_weight": 0, 00:39:51.759 "medium_priority_weight": 0, 00:39:51.759 "nvme_adminq_poll_period_us": 10000, 00:39:51.759 "nvme_error_stat": false, 00:39:51.759 "nvme_ioq_poll_period_us": 0, 00:39:51.759 "rdma_cm_event_timeout_ms": 0, 00:39:51.759 "rdma_max_cq_size": 0, 00:39:51.759 "rdma_srq_size": 0, 00:39:51.759 "reconnect_delay_sec": 0, 00:39:51.759 "timeout_admin_us": 0, 00:39:51.759 "timeout_us": 0, 00:39:51.759 "transport_ack_timeout": 0, 00:39:51.759 "transport_retry_count": 4, 00:39:51.759 "transport_tos": 0 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_nvme_attach_controller", 00:39:51.759 "params": { 00:39:51.759 "adrfam": "IPv4", 00:39:51.759 "ctrlr_loss_timeout_sec": 0, 00:39:51.759 "ddgst": false, 00:39:51.759 "fast_io_fail_timeout_sec": 0, 00:39:51.759 "hdgst": false, 00:39:51.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:51.759 "multipath": "multipath", 00:39:51.759 "name": "nvme0", 00:39:51.759 "prchk_guard": false, 00:39:51.759 "prchk_reftag": false, 00:39:51.759 "psk": "key0", 00:39:51.759 "reconnect_delay_sec": 0, 00:39:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:51.759 "traddr": "127.0.0.1", 00:39:51.759 "trsvcid": "4420", 00:39:51.759 "trtype": "TCP" 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_nvme_set_hotplug", 00:39:51.759 "params": { 00:39:51.759 "enable": false, 00:39:51.759 "period_us": 100000 00:39:51.759 } 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "method": "bdev_wait_for_examine" 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }, 00:39:51.759 { 00:39:51.759 "subsystem": "nbd", 00:39:51.759 "config": [] 00:39:51.759 } 00:39:51.759 ] 00:39:51.759 }' 00:39:51.759 10:11:39 keyring_file -- keyring/file.sh@115 -- # killprocess 121982 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 121982 ']' 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@958 -- # kill -0 121982 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121982 00:39:51.760 killing process with pid 121982 00:39:51.760 Received shutdown signal, test time was about 1.000000 seconds 00:39:51.760 00:39:51.760 Latency(us) 00:39:51.760 [2024-12-05T10:11:39.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.760 [2024-12-05T10:11:39.389Z] 
=================================================================================================================== 00:39:51.760 [2024-12-05T10:11:39.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121982' 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@973 -- # kill 121982 00:39:51.760 10:11:39 keyring_file -- common/autotest_common.sh@978 -- # wait 121982 00:39:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:52.697 10:11:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=122462 00:39:52.697 10:11:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 122462 /var/tmp/bperf.sock 00:39:52.697 10:11:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 122462 ']' 00:39:52.697 10:11:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:52.697 10:11:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.697 10:11:40 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:52.697 10:11:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:52.697 10:11:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.697 10:11:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:52.697 "subsystems": [ 00:39:52.697 { 00:39:52.697 "subsystem": "keyring", 00:39:52.697 "config": [ 00:39:52.697 { 00:39:52.697 "method": "keyring_file_add_key", 00:39:52.697 "params": { 00:39:52.697 "name": "key0", 00:39:52.697 "path": "/tmp/tmp.DgbcXT065P" 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "keyring_file_add_key", 00:39:52.697 "params": { 00:39:52.697 "name": "key1", 00:39:52.697 "path": "/tmp/tmp.55nsnRY1Dg" 00:39:52.697 } 00:39:52.697 } 00:39:52.697 ] 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "subsystem": "iobuf", 00:39:52.697 "config": [ 00:39:52.697 { 00:39:52.697 "method": "iobuf_set_options", 00:39:52.697 "params": { 00:39:52.697 "enable_numa": false, 00:39:52.697 "large_bufsize": 135168, 00:39:52.697 "large_pool_count": 1024, 00:39:52.697 "small_bufsize": 8192, 00:39:52.697 "small_pool_count": 8192 00:39:52.697 } 00:39:52.697 } 00:39:52.697 ] 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "subsystem": "sock", 00:39:52.697 "config": [ 00:39:52.697 { 00:39:52.697 "method": "sock_set_default_impl", 00:39:52.697 "params": { 00:39:52.697 "impl_name": "posix" 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "sock_impl_set_options", 00:39:52.697 "params": { 00:39:52.697 "enable_ktls": false, 00:39:52.697 "enable_placement_id": 0, 00:39:52.697 "enable_quickack": false, 00:39:52.697 "enable_recv_pipe": true, 00:39:52.697 "enable_zerocopy_send_client": false, 00:39:52.697 "enable_zerocopy_send_server": true, 00:39:52.697 "impl_name": "ssl", 00:39:52.697 "recv_buf_size": 4096, 00:39:52.697 "send_buf_size": 4096, 00:39:52.697 "tls_version": 0, 00:39:52.697 "zerocopy_threshold": 0 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "sock_impl_set_options", 00:39:52.697 "params": 
{ 00:39:52.697 "enable_ktls": false, 00:39:52.697 "enable_placement_id": 0, 00:39:52.697 "enable_quickack": false, 00:39:52.697 "enable_recv_pipe": true, 00:39:52.697 "enable_zerocopy_send_client": false, 00:39:52.697 "enable_zerocopy_send_server": true, 00:39:52.697 "impl_name": "posix", 00:39:52.697 "recv_buf_size": 2097152, 00:39:52.697 "send_buf_size": 2097152, 00:39:52.697 "tls_version": 0, 00:39:52.697 "zerocopy_threshold": 0 00:39:52.697 } 00:39:52.697 } 00:39:52.697 ] 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "subsystem": "vmd", 00:39:52.697 "config": [] 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "subsystem": "accel", 00:39:52.697 "config": [ 00:39:52.697 { 00:39:52.697 "method": "accel_set_options", 00:39:52.697 "params": { 00:39:52.697 "buf_count": 2048, 00:39:52.697 "large_cache_size": 16, 00:39:52.697 "sequence_count": 2048, 00:39:52.697 "small_cache_size": 128, 00:39:52.697 "task_count": 2048 00:39:52.697 } 00:39:52.697 } 00:39:52.697 ] 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "subsystem": "bdev", 00:39:52.697 "config": [ 00:39:52.697 { 00:39:52.697 "method": "bdev_set_options", 00:39:52.697 "params": { 00:39:52.697 "bdev_auto_examine": true, 00:39:52.697 "bdev_io_cache_size": 256, 00:39:52.697 "bdev_io_pool_size": 65535, 00:39:52.697 "iobuf_large_cache_size": 16, 00:39:52.697 "iobuf_small_cache_size": 128 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "bdev_raid_set_options", 00:39:52.697 "params": { 00:39:52.697 "process_max_bandwidth_mb_sec": 0, 00:39:52.697 "process_window_size_kb": 1024 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "bdev_iscsi_set_options", 00:39:52.697 "params": { 00:39:52.697 "timeout_sec": 30 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "bdev_nvme_set_options", 00:39:52.697 "params": { 00:39:52.697 "action_on_timeout": "none", 00:39:52.697 "allow_accel_sequence": false, 00:39:52.697 "arbitration_burst": 0, 00:39:52.697 "bdev_retry_count": 3, 00:39:52.697 "ctrlr_loss_timeout_sec": 0, 00:39:52.697 "delay_cmd_submit": true, 00:39:52.697 "dhchap_dhgroups": [ 00:39:52.697 "null", 00:39:52.697 "ffdhe2048", 00:39:52.697 "ffdhe3072", 00:39:52.697 "ffdhe4096", 00:39:52.697 "ffdhe6144", 00:39:52.697 "ffdhe8192" 00:39:52.697 ], 00:39:52.697 "dhchap_digests": [ 00:39:52.697 "sha256", 00:39:52.697 "sha384", 00:39:52.697 "sha512" 00:39:52.697 ], 00:39:52.697 "disable_auto_failback": false, 00:39:52.697 "fast_io_fail_timeout_sec": 0, 00:39:52.697 "generate_uuids": false, 00:39:52.697 "high_priority_weight": 0, 00:39:52.697 "io_path_stat": false, 00:39:52.697 "io_queue_requests": 512, 00:39:52.697 "keep_alive_timeout_ms": 10000, 00:39:52.697 "low_priority_weight": 0, 00:39:52.697 "medium_priority_weight": 0, 00:39:52.697 "nvme_adminq_poll_period_us": 10000, 00:39:52.697 "nvme_error_stat": false, 00:39:52.697 "nvme_ioq_poll_period_us": 0, 00:39:52.697 "rdma_cm_event_timeout_ms": 0, 00:39:52.697 "rdma_max_cq_size": 0, 00:39:52.697 "rdma_srq_size": 0, 00:39:52.697 "reconnect_delay_sec": 0, 00:39:52.697 "timeout_admin_us": 0, 00:39:52.697 "timeout_us": 0, 00:39:52.697 "transport_ack_timeout": 0, 00:39:52.697 "transport_retry_count": 4, 00:39:52.697 "transport_tos": 0 00:39:52.697 } 00:39:52.697 }, 00:39:52.697 { 00:39:52.697 "method": "bdev_nvme_attach_controller", 00:39:52.697 "params": { 00:39:52.698 "adrfam": "IPv4", 00:39:52.698 "ctrlr_loss_timeout_sec": 0, 00:39:52.698 "ddgst": false, 00:39:52.698 "fast_io_fail_timeout_sec": 0, 00:39:52.698 "hdgst": false, 00:39:52.698 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:39:52.698 "multipath": "multipath", 00:39:52.698 "name": "nvme0", 00:39:52.698 "prchk_guard": false, 00:39:52.698 "prchk_reftag": false, 00:39:52.698 "psk": "key0", 00:39:52.698 "reconnect_delay_sec": 0, 00:39:52.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:52.698 "traddr": "127.0.0.1", 00:39:52.698 "trsvcid": "4420", 00:39:52.698 "trtype": "TCP" 00:39:52.698 } 00:39:52.698 }, 00:39:52.698 { 00:39:52.698 "method": "bdev_nvme_set_hotplug", 00:39:52.698 "params": { 00:39:52.698 "enable": false, 00:39:52.698 "period_us": 100000 00:39:52.698 } 00:39:52.698 }, 00:39:52.698 { 00:39:52.698 "method": "bdev_wait_for_examine" 00:39:52.698 } 00:39:52.698 ] 00:39:52.698 }, 00:39:52.698 { 00:39:52.698 "subsystem": "nbd", 00:39:52.698 "config": [] 00:39:52.698 } 00:39:52.698 ] 00:39:52.698 }' 00:39:52.698 10:11:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:52.698 [2024-12-05 10:11:40.198285] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:39:52.698 [2024-12-05 10:11:40.198456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122462 ] 00:39:52.957 [2024-12-05 10:11:40.365396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.957 [2024-12-05 10:11:40.474723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.524 [2024-12-05 10:11:40.880162] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:53.524 10:11:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.524 10:11:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:53.783 10:11:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:53.783 10:11:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:53.783 10:11:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.042 10:11:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:54.042 10:11:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:54.042 10:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.042 10:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.042 10:11:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.042 10:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.042 10:11:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.302 10:11:41 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:54.302 10:11:41 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:54.302 10:11:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.302 10:11:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:54.302 10:11:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.302 10:11:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.302 10:11:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:54.561 10:11:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:54.561 
10:11:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:54.561 10:11:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:54.561 10:11:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:54.820 10:11:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:54.820 10:11:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:54.820 10:11:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.DgbcXT065P /tmp/tmp.55nsnRY1Dg 00:39:54.820 10:11:42 keyring_file -- keyring/file.sh@20 -- # killprocess 122462 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 122462 ']' 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 122462 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122462 00:39:54.820 killing process with pid 122462 00:39:54.820 Received shutdown signal, test time was about 1.000000 seconds 00:39:54.820 00:39:54.820 Latency(us) 00:39:54.820 [2024-12-05T10:11:42.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.820 [2024-12-05T10:11:42.449Z] =================================================================================================================== 00:39:54.820 [2024-12-05T10:11:42.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122462' 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@973 -- # kill 122462 00:39:54.820 10:11:42 keyring_file -- common/autotest_common.sh@978 -- # wait 122462 00:39:55.757 10:11:43 keyring_file -- keyring/file.sh@21 -- # killprocess 121951 00:39:55.757 10:11:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 121951 ']' 00:39:55.757 10:11:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 121951 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121951 00:39:55.758 killing process with pid 121951 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121951' 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@973 -- # kill 121951 00:39:55.758 10:11:43 keyring_file -- common/autotest_common.sh@978 -- # wait 121951 00:39:57.662 ************************************ 00:39:57.662 END TEST keyring_file 00:39:57.662 ************************************ 00:39:57.662 00:39:57.662 real 0m19.389s 00:39:57.662 user 0m44.617s 00:39:57.662 sys 0m3.725s 00:39:57.662 10:11:45 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:57.662 10:11:45 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:39:57.662 10:11:45 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:57.662 10:11:45 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:39:57.662 10:11:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:57.662 10:11:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.662 10:11:45 -- common/autotest_common.sh@10 -- # set +x 00:39:57.662 ************************************ 00:39:57.662 START TEST keyring_linux 00:39:57.662 ************************************ 00:39:57.662 10:11:45 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:39:57.662 Joined session keyring: 827577420 00:39:57.920 * Looking for test storage... 00:39:57.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:39:57.920 10:11:45 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:57.920 10:11:45 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:39:57.920 10:11:45 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.921 --rc genhtml_branch_coverage=1 00:39:57.921 --rc genhtml_function_coverage=1 00:39:57.921 --rc genhtml_legend=1 00:39:57.921 --rc geninfo_all_blocks=1 00:39:57.921 --rc geninfo_unexecuted_blocks=1 00:39:57.921 00:39:57.921 ' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.921 --rc genhtml_branch_coverage=1 00:39:57.921 --rc genhtml_function_coverage=1 00:39:57.921 --rc genhtml_legend=1 00:39:57.921 --rc geninfo_all_blocks=1 00:39:57.921 --rc geninfo_unexecuted_blocks=1 00:39:57.921 00:39:57.921 ' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.921 --rc genhtml_branch_coverage=1 00:39:57.921 --rc genhtml_function_coverage=1 00:39:57.921 --rc genhtml_legend=1 00:39:57.921 --rc geninfo_all_blocks=1 00:39:57.921 --rc geninfo_unexecuted_blocks=1 00:39:57.921 00:39:57.921 ' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.921 --rc genhtml_branch_coverage=1 00:39:57.921 --rc genhtml_function_coverage=1 00:39:57.921 --rc genhtml_legend=1 00:39:57.921 --rc geninfo_all_blocks=1 00:39:57.921 --rc geninfo_unexecuted_blocks=1 00:39:57.921 00:39:57.921 ' 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.921 10:11:45 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bf6a2f69-fcc6-4ee1-9d4a-7a6c47202516 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.921 10:11:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.921 10:11:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.921 10:11:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.921 10:11:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.921 10:11:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:57.921 10:11:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@51 -- # : 0 
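The trace above is scripts/common.sh deciding whether the installed lcov (1.15, taken from `lcov --version | awk '{print $NF}'`) predates 2.0: `lt 1.15 2` splits both version strings on `.`, `-`, `:` and compares them field by field, and the pre-2.0 branch then exports the LCOV_OPTS/--rc coverage switches seen next. A condensed re-creation of that logic, handling only the operators exercised here (the real script also validates each field via decimal() and supports le/ge):

    # condensed sketch of lt()/cmp_versions() from scripts/common.sh
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # walk the longer of the two field lists; missing fields count as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == ">" ]] && return 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == "<" ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1  # differ, but not the way asked
        done
        [[ $op == "==" ]]  # ran out of fields: the versions are equal
    }

    lt 1.15 2 && echo "pre-2.0 lcov: enable explicit branch/function coverage flags"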
00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:57.921 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:57.921 /tmp/:spdk-test:key0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:57.921 10:11:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:57.921 /tmp/:spdk-test:key1 00:39:57.921 10:11:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=122644 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.921 10:11:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 122644 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 122644 ']' 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.921 10:11:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:58.179 [2024-12-05 10:11:45.672250] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
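The two prep_key calls above wrap each raw hex key in the NVMe/TCP TLS PSK interchange format before anything touches the kernel keyring: `NVMeTLSkey-1:<hash>:<base64 payload>:`, where hash `00` means no HMAC transform and the payload is the configured PSK with a CRC32 appended. The Python body of nvmf/common.sh is not expanded in the trace, so the little-endian CRC packing in this sketch is inferred from the interchange format and the key text that shows up below (`NVMeTLSkey-1:00:MDAx...JEiQ:`), not read from the script:

    # hedged sketch of prep_key/format_interchange_psk as traced above
    prep_key_sketch() {
        local key=$1 digest=$2 path=$3 b64
        # base64(key || CRC32(key)) -- CRC byte order assumed little-endian
        b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
        printf 'NVMeTLSkey-1:0%d:%s:\n' "$digest" "$b64" > "$path"
        chmod 0600 "$path"  # PSK files must not be group/world readable
    }

    prep_key_sketch 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0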
00:39:58.179 [2024-12-05 10:11:45.672463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122644 ] 00:39:58.437 [2024-12-05 10:11:45.852107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.437 [2024-12-05 10:11:45.953098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:59.399 [2024-12-05 10:11:46.707372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:59.399 null0 00:39:59.399 [2024-12-05 10:11:46.739348] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:59.399 [2024-12-05 10:11:46.739691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:59.399 286008370 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:59.399 418524745 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=122679 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:59.399 10:11:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 122679 /var/tmp/bperf.sock 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 122679 ']' 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:59.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:59.399 10:11:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:59.399 [2024-12-05 10:11:46.884340] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
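While bdevperf comes up, note the keyring state it will consume: both interchange-format PSKs have just been loaded into the kernel session keyring with `keyctl add user <name> <payload> @s`, which prints each key's serial number (286008370 and 418524745 above). From here on the test refers to keys by name (`:spdk-test:key0`) through SPDK's keyring_linux module rather than by file path. The same round trip in isolation, using only the keyctl commands that appear in this log (run under `keyctl session` so `@s` is a private session keyring, as the keyctl-session-wrapper did at the top of the test):

    psk=$(</tmp/:spdk-test:key0)                     # NVMeTLSkey-1:00:...:
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add key, capture its serial
    keyctl search @s user :spdk-test:key0            # name -> the same serial
    keyctl print "$sn"                               # payload readback
    keyctl unlink "$sn" @s                           # drop it from the session keyring

The check_keys and cleanup steps traced further below are exactly the search/print/unlink half of this round trip.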
00:39:59.399 [2024-12-05 10:11:46.884522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122679 ] 00:39:59.669 [2024-12-05 10:11:47.073637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.669 [2024-12-05 10:11:47.236581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.244 10:11:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:00.244 10:11:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:00.244 10:11:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:00.244 10:11:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:00.505 10:11:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:00.505 10:11:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:01.074 10:11:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:01.074 10:11:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:01.334 [2024-12-05 10:11:48.842065] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:01.334 nvme0n1 00:40:01.334 10:11:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:01.334 10:11:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:01.334 10:11:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:01.334 10:11:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:01.334 10:11:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.334 10:11:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:01.594 10:11:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:01.594 10:11:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:01.594 10:11:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:01.594 10:11:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:01.594 10:11:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:01.594 10:11:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:01.594 10:11:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.854 10:11:49 keyring_linux -- keyring/linux.sh@25 -- # sn=286008370 00:40:01.854 10:11:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:01.854 10:11:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:02.113 10:11:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 286008370 == \2\8\6\0\0\8\3\7\0 ]] 00:40:02.113 10:11:49 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 286008370 00:40:02.113 10:11:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:02.113 10:11:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:02.113 Running I/O for 1 seconds... 00:40:03.050 9774.00 IOPS, 38.18 MiB/s 00:40:03.050 Latency(us) 00:40:03.050 [2024-12-05T10:11:50.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.050 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:03.050 nvme0n1 : 1.01 9774.24 38.18 0.00 0.00 13003.21 8043.05 20256.58 00:40:03.050 [2024-12-05T10:11:50.679Z] =================================================================================================================== 00:40:03.050 [2024-12-05T10:11:50.679Z] Total : 9774.24 38.18 0.00 0.00 13003.21 8043.05 20256.58 00:40:03.050 { 00:40:03.050 "results": [ 00:40:03.050 { 00:40:03.050 "job": "nvme0n1", 00:40:03.050 "core_mask": "0x2", 00:40:03.050 "workload": "randread", 00:40:03.050 "status": "finished", 00:40:03.050 "queue_depth": 128, 00:40:03.050 "io_size": 4096, 00:40:03.050 "runtime": 1.013071, 00:40:03.050 "iops": 9774.240897232277, 00:40:03.050 "mibps": 38.18062850481358, 00:40:03.050 "io_failed": 0, 00:40:03.050 "io_timeout": 0, 00:40:03.050 "avg_latency_us": 13003.208937037512, 00:40:03.050 "min_latency_us": 8043.054545454545, 00:40:03.050 "max_latency_us": 20256.581818181818 00:40:03.050 } 00:40:03.050 ], 00:40:03.050 "core_count": 1 00:40:03.050 } 00:40:03.050 10:11:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:03.050 10:11:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:03.309 10:11:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:03.309 10:11:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:03.309 10:11:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:03.309 10:11:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:03.309 10:11:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:03.568 10:11:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:03.828 10:11:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:03.828 10:11:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:03.828 10:11:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:03.828 10:11:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:03.828 10:11:51 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.828 10:11:51 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.828 10:11:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:04.088 [2024-12-05 10:11:51.496060] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:04.088 [2024-12-05 10:11:51.496265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002fb00 (107): Transport endpoint is not connected 00:40:04.088 [2024-12-05 10:11:51.497228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002fb00 (9): Bad file descriptor 00:40:04.088 [2024-12-05 10:11:51.498221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:04.088 [2024-12-05 10:11:51.498251] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:04.088 [2024-12-05 10:11:51.498265] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:04.088 [2024-12-05 10:11:51.498279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
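The errors above are the intended outcome of this step: only key0's PSK matches what the target was configured with, so attaching with `:spdk-test:key1` makes the TLS handshake collapse and leaves the controller in failed state. The `NOT` wrapper that launched the command (its tail, `es=1` through `(( !es == 0 ))`, is traced in the next entry) exists to turn that expected failure into a test pass. Simplified from the trace (the real helper in autotest_common.sh also honors an expected-exit-status variable, the `[[ -n '' ]]` check below):

    # expected-failure wrapper: succeed only if the command failed
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return "$es"  # killed by a signal: report, don't invert
        ((!es == 0))                  # es != 0 -> return 0; es == 0 -> return 1
    }

    NOT false && echo "expected failure: test passes"
    NOT true  || echo "unexpected success: test fails"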
00:40:04.088 2024/12/05 10:11:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:40:04.088 request: 00:40:04.088 { 00:40:04.088 "method": "bdev_nvme_attach_controller", 00:40:04.088 "params": { 00:40:04.088 "name": "nvme0", 00:40:04.088 "trtype": "tcp", 00:40:04.088 "traddr": "127.0.0.1", 00:40:04.088 "adrfam": "ipv4", 00:40:04.088 "trsvcid": "4420", 00:40:04.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:04.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:04.088 "prchk_reftag": false, 00:40:04.088 "prchk_guard": false, 00:40:04.088 "hdgst": false, 00:40:04.088 "ddgst": false, 00:40:04.088 "psk": ":spdk-test:key1", 00:40:04.088 "allow_unrecognized_csi": false 00:40:04.088 } 00:40:04.088 } 00:40:04.088 Got JSON-RPC error response 00:40:04.088 GoRPCClient: error on JSON-RPC call 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@33 -- # sn=286008370 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 286008370 00:40:04.088 1 links removed 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@33 -- # sn=418524745 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 418524745 00:40:04.088 1 links removed 00:40:04.088 10:11:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 122679 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 122679 ']' 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 122679 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122679 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:04.088 
10:11:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:04.088 killing process with pid 122679 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122679' 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 122679 00:40:04.088 Received shutdown signal, test time was about 1.000000 seconds 00:40:04.088 00:40:04.088 Latency(us) 00:40:04.088 [2024-12-05T10:11:51.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.088 [2024-12-05T10:11:51.717Z] =================================================================================================================== 00:40:04.088 [2024-12-05T10:11:51.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:04.088 10:11:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 122679 00:40:05.026 10:11:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 122644 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 122644 ']' 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 122644 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122644 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122644' 00:40:05.026 killing process with pid 122644 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 122644 00:40:05.026 10:11:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 122644 00:40:06.929 ************************************ 00:40:06.929 END TEST keyring_linux 00:40:06.929 ************************************ 00:40:06.929 00:40:06.929 real 0m9.281s 00:40:06.929 user 0m16.243s 00:40:06.929 sys 0m1.927s 00:40:06.929 10:11:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.929 10:11:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:06.929 10:11:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:06.929 10:11:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:06.929 10:11:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:06.929 10:11:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:06.929 10:11:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:06.929 10:11:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:06.929 10:11:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:06.929 10:11:54 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:40:06.929 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:40:06.929 10:11:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:06.929 10:11:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:06.929 10:11:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:06.929 10:11:54 -- common/autotest_common.sh@10 -- # set +x 00:40:09.458 INFO: APP EXITING 00:40:09.458 INFO: killing all VMs 00:40:09.458 INFO: killing vhost app 00:40:09.458 INFO: EXIT DONE 00:40:09.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:09.715 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:09.715 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:10.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:10.649 Cleaning 00:40:10.649 Removing: /var/run/dpdk/spdk0/config 00:40:10.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:10.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:10.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:10.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:10.649 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:10.649 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:10.649 Removing: /var/run/dpdk/spdk1/config 00:40:10.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:10.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:10.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:10.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:10.649 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:10.649 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:10.649 Removing: /var/run/dpdk/spdk2/config 00:40:10.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:10.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:10.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:10.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:10.649 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:10.649 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:10.649 Removing: /var/run/dpdk/spdk3/config 00:40:10.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:10.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:10.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:10.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:10.649 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:10.649 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:10.649 Removing: /var/run/dpdk/spdk4/config 00:40:10.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:10.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:10.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:10.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:10.649 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:10.649 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:10.649 Removing: /dev/shm/nvmf_trace.0 00:40:10.649 Removing: /dev/shm/spdk_tgt_trace.pid59109 00:40:10.649 Removing: /var/run/dpdk/spdk0 00:40:10.649 Removing: /var/run/dpdk/spdk1 00:40:10.649 Removing: /var/run/dpdk/spdk2 00:40:10.649 Removing: /var/run/dpdk/spdk3 00:40:10.649 Removing: /var/run/dpdk/spdk4 00:40:10.649 Removing: /var/run/dpdk/spdk_pid100220 00:40:10.649 Removing: 
/var/run/dpdk/spdk_pid101640 00:40:10.650 Removing: /var/run/dpdk/spdk_pid102304 00:40:10.650 Removing: /var/run/dpdk/spdk_pid102313 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104386 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104477 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104581 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104678 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104853 00:40:10.650 Removing: /var/run/dpdk/spdk_pid104946 00:40:10.650 Removing: /var/run/dpdk/spdk_pid105043 00:40:10.650 Removing: /var/run/dpdk/spdk_pid105133 00:40:10.650 Removing: /var/run/dpdk/spdk_pid105549 00:40:10.650 Removing: /var/run/dpdk/spdk_pid106331 00:40:10.650 Removing: /var/run/dpdk/spdk_pid107764 00:40:10.650 Removing: /var/run/dpdk/spdk_pid107972 00:40:10.650 Removing: /var/run/dpdk/spdk_pid108259 00:40:10.650 Removing: /var/run/dpdk/spdk_pid108822 00:40:10.650 Removing: /var/run/dpdk/spdk_pid109227 00:40:10.650 Removing: /var/run/dpdk/spdk_pid111913 00:40:10.650 Removing: /var/run/dpdk/spdk_pid111956 00:40:10.650 Removing: /var/run/dpdk/spdk_pid112328 00:40:10.650 Removing: /var/run/dpdk/spdk_pid112379 00:40:10.650 Removing: /var/run/dpdk/spdk_pid112808 00:40:10.650 Removing: /var/run/dpdk/spdk_pid113393 00:40:10.650 Removing: /var/run/dpdk/spdk_pid113822 00:40:10.650 Removing: /var/run/dpdk/spdk_pid114914 00:40:10.650 Removing: /var/run/dpdk/spdk_pid116002 00:40:10.650 Removing: /var/run/dpdk/spdk_pid116120 00:40:10.650 Removing: /var/run/dpdk/spdk_pid116186 00:40:10.650 Removing: /var/run/dpdk/spdk_pid117825 00:40:10.650 Removing: /var/run/dpdk/spdk_pid118188 00:40:10.650 Removing: /var/run/dpdk/spdk_pid118530 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119124 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119134 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119576 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119731 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119887 00:40:10.650 Removing: /var/run/dpdk/spdk_pid119985 00:40:10.650 Removing: /var/run/dpdk/spdk_pid120138 00:40:10.650 Removing: /var/run/dpdk/spdk_pid120252 00:40:10.650 Removing: /var/run/dpdk/spdk_pid120994 00:40:10.650 Removing: /var/run/dpdk/spdk_pid121025 00:40:10.650 Removing: /var/run/dpdk/spdk_pid121057 00:40:10.650 Removing: /var/run/dpdk/spdk_pid121417 00:40:10.650 Removing: /var/run/dpdk/spdk_pid121448 00:40:10.909 Removing: /var/run/dpdk/spdk_pid121479 00:40:10.909 Removing: /var/run/dpdk/spdk_pid121951 00:40:10.909 Removing: /var/run/dpdk/spdk_pid121982 00:40:10.909 Removing: /var/run/dpdk/spdk_pid122462 00:40:10.909 Removing: /var/run/dpdk/spdk_pid122644 00:40:10.909 Removing: /var/run/dpdk/spdk_pid122679 00:40:10.909 Removing: /var/run/dpdk/spdk_pid58895 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59109 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59401 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59506 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59568 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59691 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59726 00:40:10.909 Removing: /var/run/dpdk/spdk_pid59885 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60178 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60374 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60491 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60603 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60729 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60763 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60805 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60870 00:40:10.909 Removing: /var/run/dpdk/spdk_pid60994 00:40:10.909 Removing: /var/run/dpdk/spdk_pid61650 
00:40:10.909 Removing: /var/run/dpdk/spdk_pid61730 00:40:10.909 Removing: /var/run/dpdk/spdk_pid61812 00:40:10.909 Removing: /var/run/dpdk/spdk_pid61840 00:40:10.909 Removing: /var/run/dpdk/spdk_pid61982 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62010 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62148 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62177 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62249 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62279 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62349 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62379 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62593 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62629 00:40:10.909 Removing: /var/run/dpdk/spdk_pid62718 00:40:10.909 Removing: /var/run/dpdk/spdk_pid63231 00:40:10.909 Removing: /var/run/dpdk/spdk_pid63654 00:40:10.909 Removing: /var/run/dpdk/spdk_pid66145 00:40:10.909 Removing: /var/run/dpdk/spdk_pid66192 00:40:10.909 Removing: /var/run/dpdk/spdk_pid66570 00:40:10.909 Removing: /var/run/dpdk/spdk_pid66628 00:40:10.909 Removing: /var/run/dpdk/spdk_pid67060 00:40:10.909 Removing: /var/run/dpdk/spdk_pid67658 00:40:10.909 Removing: /var/run/dpdk/spdk_pid68117 00:40:10.909 Removing: /var/run/dpdk/spdk_pid69244 00:40:10.909 Removing: /var/run/dpdk/spdk_pid70377 00:40:10.909 Removing: /var/run/dpdk/spdk_pid70507 00:40:10.909 Removing: /var/run/dpdk/spdk_pid70587 00:40:10.909 Removing: /var/run/dpdk/spdk_pid72282 00:40:10.909 Removing: /var/run/dpdk/spdk_pid72690 00:40:10.909 Removing: /var/run/dpdk/spdk_pid80127 00:40:10.909 Removing: /var/run/dpdk/spdk_pid80581 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81253 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81778 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81792 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81847 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81906 00:40:10.909 Removing: /var/run/dpdk/spdk_pid81974 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82013 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82022 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82053 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82093 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82096 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82161 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82220 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82277 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82325 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82335 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82360 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82688 00:40:10.909 Removing: /var/run/dpdk/spdk_pid82860 00:40:10.909 Removing: /var/run/dpdk/spdk_pid83125 00:40:10.909 Removing: /var/run/dpdk/spdk_pid88804 00:40:10.909 Removing: /var/run/dpdk/spdk_pid89344 00:40:10.909 Removing: /var/run/dpdk/spdk_pid89458 00:40:10.909 Removing: /var/run/dpdk/spdk_pid89629 00:40:10.909 Removing: /var/run/dpdk/spdk_pid89689 00:40:10.909 Removing: /var/run/dpdk/spdk_pid89748 00:40:11.168 Removing: /var/run/dpdk/spdk_pid89813 00:40:11.168 Removing: /var/run/dpdk/spdk_pid90003 00:40:11.168 Removing: /var/run/dpdk/spdk_pid90176 00:40:11.168 Removing: /var/run/dpdk/spdk_pid90492 00:40:11.168 Removing: /var/run/dpdk/spdk_pid90635 00:40:11.168 Removing: /var/run/dpdk/spdk_pid90908 00:40:11.168 Removing: /var/run/dpdk/spdk_pid91048 00:40:11.168 Removing: /var/run/dpdk/spdk_pid91200 00:40:11.168 Removing: /var/run/dpdk/spdk_pid91603 00:40:11.168 Removing: /var/run/dpdk/spdk_pid92093 00:40:11.168 Removing: /var/run/dpdk/spdk_pid92094 00:40:11.168 Removing: /var/run/dpdk/spdk_pid92095 00:40:11.168 Removing: 
/var/run/dpdk/spdk_pid92395 00:40:11.168 Removing: /var/run/dpdk/spdk_pid92704 00:40:11.168 Removing: /var/run/dpdk/spdk_pid92714 00:40:11.168 Removing: /var/run/dpdk/spdk_pid95111 00:40:11.168 Removing: /var/run/dpdk/spdk_pid95561 00:40:11.168 Removing: /var/run/dpdk/spdk_pid95942 00:40:11.168 Removing: /var/run/dpdk/spdk_pid96573 00:40:11.168 Removing: /var/run/dpdk/spdk_pid96587 00:40:11.168 Removing: /var/run/dpdk/spdk_pid96988 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97007 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97029 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97063 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97069 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97219 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97222 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97330 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97344 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97448 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97451 00:40:11.168 Removing: /var/run/dpdk/spdk_pid97974 00:40:11.168 Removing: /var/run/dpdk/spdk_pid98017 00:40:11.168 Removing: /var/run/dpdk/spdk_pid98163 00:40:11.168 Removing: /var/run/dpdk/spdk_pid98277 00:40:11.168 Removing: /var/run/dpdk/spdk_pid98744 00:40:11.168 Removing: /var/run/dpdk/spdk_pid98992 00:40:11.168 Removing: /var/run/dpdk/spdk_pid99562 00:40:11.168 Clean 00:40:11.168 10:11:58 -- common/autotest_common.sh@1453 -- # return 0 00:40:11.168 10:11:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:11.168 10:11:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.168 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:40:11.168 10:11:58 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:11.168 10:11:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.168 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:40:11.427 10:11:58 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:11.427 10:11:58 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:40:11.427 10:11:58 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:40:11.427 10:11:58 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:11.427 10:11:58 -- spdk/autotest.sh@398 -- # hostname 00:40:11.427 10:11:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:40:11.427 geninfo: WARNING: invalid characters removed from testname! 
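With the tests and cleanup done, autotest.sh turns to coverage: the capture above writes this run's counters to cov_test.info (the geninfo warning only flags unusual characters in the fedora39-cloud test name, not a failure), and the next few entries fold that into the pre-test baseline and strip out code that is not SPDK's own. The pipeline condensed, with the long `--rc` option blocks folded into the `$LCOV` wrapper exported earlier in this log; `$rootdir` and `$out` stand in for the spdk repo and output paths used in the trace:

    $LCOV -q -c --no-external -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"
    $LCOV -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    $LCOV -q -r "$out/cov_total.info" '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$out/cov_total.info"

(The trace runs the vmd/spdk_lspci/spdk_top removals as separate invocations; lcov accepts multiple patterns per -r, so they are combined here.)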
00:40:37.965 10:12:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:38.223 10:12:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:40.755 10:12:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:43.285 10:12:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:45.816 10:12:33 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:48.342 10:12:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:50.871 10:12:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:50.871 10:12:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:50.871 10:12:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:40:50.871 10:12:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:50.871 10:12:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:50.871 10:12:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:50.871 + [[ -n 5267 ]] 00:40:50.871 + sudo kill 5267 00:40:50.881 [Pipeline] } 00:40:50.901 [Pipeline] // timeout 00:40:50.906 [Pipeline] } 00:40:50.924 [Pipeline] // stage 00:40:50.930 [Pipeline] } 00:40:50.946 [Pipeline] // catchError 00:40:50.956 [Pipeline] stage 00:40:50.958 [Pipeline] { (Stop VM) 00:40:50.970 [Pipeline] sh 00:40:51.300 + vagrant halt 00:40:54.591 ==> default: Halting domain... 
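Everything after the coverage merge is teardown: timing_finish renders the per-step timing log as a flamegraph if the tool is installed, the leftover process 5267 is killed, and the pipeline halts and destroys the vagrant VM. The timing step reconstructed from the trace above; the SVG destination is not visible in the log, so the redirect here is an assumption:

    # feed the per-step timing log to flamegraph.pl, as autotest_common.sh does
    flamegraph=/usr/local/FlameGraph/flamegraph.pl
    if [[ -e $output_dir/timing.txt && -x $flamegraph ]]; then
        "$flamegraph" --title 'Build Timing' --nametype Step: --countname seconds \
            "$output_dir/timing.txt" > "$output_dir/timing.svg"  # output path assumed
    fi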
00:41:01.169 [Pipeline] sh 00:41:01.456 + vagrant destroy -f 00:41:03.989 ==> default: Removing domain... 00:41:04.260 [Pipeline] sh 00:41:04.543 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:41:04.553 [Pipeline] } 00:41:04.565 [Pipeline] // stage 00:41:04.570 [Pipeline] } 00:41:04.582 [Pipeline] // dir 00:41:04.588 [Pipeline] } 00:41:04.600 [Pipeline] // wrap 00:41:04.607 [Pipeline] } 00:41:04.620 [Pipeline] // catchError 00:41:04.630 [Pipeline] stage 00:41:04.632 [Pipeline] { (Epilogue) 00:41:04.646 [Pipeline] sh 00:41:04.928 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:10.207 [Pipeline] catchError 00:41:10.210 [Pipeline] { 00:41:10.223 [Pipeline] sh 00:41:10.504 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:10.764 Artifacts sizes are good 00:41:10.773 [Pipeline] } 00:41:10.787 [Pipeline] // catchError 00:41:10.798 [Pipeline] archiveArtifacts 00:41:10.806 Archiving artifacts 00:41:10.968 [Pipeline] cleanWs 00:41:10.985 [WS-CLEANUP] Deleting project workspace... 00:41:10.985 [WS-CLEANUP] Deferred wipeout is used... 00:41:11.009 [WS-CLEANUP] done 00:41:11.011 [Pipeline] } 00:41:11.027 [Pipeline] // stage 00:41:11.032 [Pipeline] } 00:41:11.048 [Pipeline] // node 00:41:11.054 [Pipeline] End of Pipeline 00:41:11.092 Finished: SUCCESS